Test Report: QEMU_macOS 19346

a97ed275d9afb14524a68c67a981a32c27d545ab:2024-07-29:35563

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.32
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.09
36 TestAddons/Setup 10.15
37 TestCertOptions 10.11
38 TestCertExpiration 199.72
39 TestDockerFlags 12.31
40 TestForceSystemdFlag 10.16
41 TestForceSystemdEnv 10.16
47 TestErrorSpam/setup 9.85
56 TestFunctional/serial/StartWithProxy 10.01
58 TestFunctional/serial/SoftStart 5.25
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
70 TestFunctional/serial/MinikubeKubectlCmd 0.74
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.98
72 TestFunctional/serial/ExtraConfig 5.27
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.16
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.13
91 TestFunctional/parallel/CpCmd 0.29
93 TestFunctional/parallel/FileSync 0.07
94 TestFunctional/parallel/CertSync 0.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
108 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 101.62
109 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
110 TestFunctional/parallel/ServiceCmd/List 0.04
111 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
112 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
113 TestFunctional/parallel/ServiceCmd/Format 0.04
114 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/Version/components 0.04
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
127 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.29
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.12
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
136 TestFunctional/parallel/DockerEnv/bash 0.04
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 34.89
150 TestMultiControlPlane/serial/StartCluster 9.94
151 TestMultiControlPlane/serial/DeployApp 120.03
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
159 TestMultiControlPlane/serial/RestartSecondaryNode 54.26
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.91
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
164 TestMultiControlPlane/serial/StopCluster 1.98
165 TestMultiControlPlane/serial/RestartCluster 5.26
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
171 TestImageBuild/serial/Setup 9.93
174 TestJSONOutput/start/Command 9.99
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.27
206 TestMountStart/serial/StartWithMountFirst 9.97
209 TestMultiNode/serial/FreshStart2Nodes 9.83
210 TestMultiNode/serial/DeployApp2Nodes 116.39
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.08
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 50.46
218 TestMultiNode/serial/RestartKeepsNodes 8.39
219 TestMultiNode/serial/DeleteNode 0.1
220 TestMultiNode/serial/StopMultiNode 3.51
221 TestMultiNode/serial/RestartMultiNode 5.25
222 TestMultiNode/serial/ValidateNameConflict 20.48
226 TestPreload 10.01
228 TestScheduledStopUnix 10
229 TestSkaffold 12.19
232 TestRunningBinaryUpgrade 653.98
234 TestKubernetesUpgrade 17.44
248 TestStoppedBinaryUpgrade/Upgrade 582.72
258 TestPause/serial/Start 9.82
261 TestNoKubernetes/serial/StartWithK8s 9.79
262 TestNoKubernetes/serial/StartWithStopK8s 5.31
263 TestNoKubernetes/serial/Start 5.26
267 TestNoKubernetes/serial/StartNoArgs 6.68
269 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.58
270 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.92
271 TestNetworkPlugins/group/kindnet/Start 10.12
272 TestNetworkPlugins/group/auto/Start 9.97
273 TestNetworkPlugins/group/flannel/Start 9.91
274 TestNetworkPlugins/group/enable-default-cni/Start 9.92
275 TestNetworkPlugins/group/bridge/Start 10.02
276 TestNetworkPlugins/group/kubenet/Start 9.91
277 TestNetworkPlugins/group/custom-flannel/Start 9.83
278 TestNetworkPlugins/group/calico/Start 9.82
279 TestNetworkPlugins/group/false/Start 9.82
281 TestStartStop/group/old-k8s-version/serial/FirstStart 9.81
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.1
292 TestStartStop/group/no-preload/serial/FirstStart 9.83
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
297 TestStartStop/group/no-preload/serial/SecondStart 5.25
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 10.59
304 TestStartStop/group/embed-certs/serial/DeployApp 0.09
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
308 TestStartStop/group/embed-certs/serial/SecondStart 5.26
309 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
310 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
311 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
312 TestStartStop/group/embed-certs/serial/Pause 0.1
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.23
316 TestStartStop/group/newest-cni/serial/FirstStart 12.11
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.17
323 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
325 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
327 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
328 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/SecondStart 5.26
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.1
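The durations in the table above are in seconds, one `order name duration` row per test. For triage, such rows sort mechanically by duration; a minimal sketch, assuming the table body has been saved to a plain-text file `failures.txt` (hypothetical name):

```shell
# List the five slowest failures from the table above, assuming it was
# saved as failures.txt with one "order name duration" row per line.
sort -k3,3 -rn failures.txt | head -5
```

With this report's data, the two binary-upgrade tests (TestRunningBinaryUpgrade at 653.98 s and TestStoppedBinaryUpgrade/Upgrade at 582.72 s) sort to the top.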
TestDownloadOnly/v1.20.0/json-events (12.32s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-017000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-017000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (12.315116625s)

-- stdout --
	{"specversion":"1.0","id":"092bf4f0-429f-40b8-a7ea-56fad9f5652e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-017000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff8ae6f8-22e4-4da8-95e8-804a552a545b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19346"}}
	{"specversion":"1.0","id":"9815bb3a-4d7a-4c23-9608-e30802ea33c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig"}}
	{"specversion":"1.0","id":"01ad2689-c1a5-4360-ae44-8e2e9fb69b02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"88289081-4ef7-46dc-aa12-57ba21ebb56b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9538ca60-e78e-45c7-9947-aa6d364bb07d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube"}}
	{"specversion":"1.0","id":"dc6d1315-2c12-43a7-8bdc-8f80453cea69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"f914a544-a58b-45be-a1ea-e7264685fe67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b31a0adb-7a52-4555-b1c2-8c9c4c8b212c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"ee27aa98-0795-44ce-b50e-6e2cb1cac9bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3387995e-4b03-4115-a8d7-7b493c6a2d5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-017000\" primary control-plane node in \"download-only-017000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"83c650f5-349f-4366-8c62-2ccfc94c3311","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a3542941-a0dd-486f-8cf3-9a6affac6ec9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108665a60 0x108665a60 0x108665a60 0x108665a60 0x108665a60 0x108665a60 0x108665a60] Decompressors:map[bz2:0x1400000eed0 gz:0x1400000eed8 tar:0x1400000ee80 tar.bz2:0x1400000ee90 tar.gz:0x1400000eea0 tar.xz:0x1400000eeb0 tar.zst:0x1400000eec0 tbz2:0x1400000ee90 tgz:0x14
00000eea0 txz:0x1400000eeb0 tzst:0x1400000eec0 xz:0x1400000eee0 zip:0x1400000eef0 zst:0x1400000eee8] Getters:map[file:0x14001388560 http:0x140000b4370 https:0x140000b43c0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"625b8e0b-00aa-4792-924c-38e8866c037b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0729 16:47:22.669599    7567 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:47:22.669732    7567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:47:22.669735    7567 out.go:304] Setting ErrFile to fd 2...
	I0729 16:47:22.669741    7567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:47:22.669867    7567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	W0729 16:47:22.669957    7567 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19346-7076/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19346-7076/.minikube/config/config.json: no such file or directory
	I0729 16:47:22.671282    7567 out.go:298] Setting JSON to true
	I0729 16:47:22.689727    7567 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4609,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:47:22.689814    7567 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:47:22.693917    7567 out.go:97] [download-only-017000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:47:22.694081    7567 notify.go:220] Checking for updates...
	W0729 16:47:22.694124    7567 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 16:47:22.698236    7567 out.go:169] MINIKUBE_LOCATION=19346
	I0729 16:47:22.699951    7567 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:47:22.704868    7567 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:47:22.708853    7567 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:47:22.715951    7567 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	W0729 16:47:22.722853    7567 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 16:47:22.723065    7567 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:47:22.726945    7567 out.go:97] Using the qemu2 driver based on user configuration
	I0729 16:47:22.726963    7567 start.go:297] selected driver: qemu2
	I0729 16:47:22.726976    7567 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:47:22.727032    7567 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:47:22.732040    7567 out.go:169] Automatically selected the socket_vmnet network
	I0729 16:47:22.737223    7567 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 16:47:22.737322    7567 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:47:22.737386    7567 cni.go:84] Creating CNI manager for ""
	I0729 16:47:22.737405    7567 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 16:47:22.737457    7567 start.go:340] cluster config:
	{Name:download-only-017000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:47:22.741352    7567 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:47:22.743355    7567 out.go:97] Downloading VM boot image ...
	I0729 16:47:22.743371    7567 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 16:47:27.725360    7567 out.go:97] Starting "download-only-017000" primary control-plane node in "download-only-017000" cluster
	I0729 16:47:27.725383    7567 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:47:27.786483    7567 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:47:27.786489    7567 cache.go:56] Caching tarball of preloaded images
	I0729 16:47:27.786635    7567 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:47:27.791172    7567 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 16:47:27.791179    7567 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:47:27.878373    7567 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:47:33.948214    7567 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:47:33.948372    7567 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:47:34.642812    7567 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 16:47:34.643013    7567 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/download-only-017000/config.json ...
	I0729 16:47:34.643033    7567 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/download-only-017000/config.json: {Name:mk2e750136eef84cd0c3e61bd45afe4021d8b7f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:47:34.643261    7567 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:47:34.644135    7567 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 16:47:35.008983    7567 out.go:169] 
	W0729 16:47:35.015155    7567 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108665a60 0x108665a60 0x108665a60 0x108665a60 0x108665a60 0x108665a60 0x108665a60] Decompressors:map[bz2:0x1400000eed0 gz:0x1400000eed8 tar:0x1400000ee80 tar.bz2:0x1400000ee90 tar.gz:0x1400000eea0 tar.xz:0x1400000eeb0 tar.zst:0x1400000eec0 tbz2:0x1400000ee90 tgz:0x1400000eea0 txz:0x1400000eeb0 tzst:0x1400000eec0 xz:0x1400000eee0 zip:0x1400000eef0 zst:0x1400000eee8] Getters:map[file:0x14001388560 http:0x140000b4370 https:0x140000b43c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 16:47:35.015187    7567 out_reason.go:110] 
	W0729 16:47:35.022991    7567 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:47:35.026924    7567 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-017000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (12.32s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-532000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-532000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.925261041s)

-- stdout --
	* [offline-docker-532000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-532000" primary control-plane node in "offline-docker-532000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-532000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:59:57.226824    9045 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:59:57.226962    9045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:59:57.226966    9045 out.go:304] Setting ErrFile to fd 2...
	I0729 16:59:57.226968    9045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:59:57.227085    9045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:59:57.228197    9045 out.go:298] Setting JSON to false
	I0729 16:59:57.245303    9045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5364,"bootTime":1722292233,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:59:57.245381    9045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:59:57.250459    9045 out.go:177] * [offline-docker-532000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:59:57.257473    9045 notify.go:220] Checking for updates...
	I0729 16:59:57.260413    9045 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:59:57.264375    9045 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:59:57.267395    9045 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:59:57.270452    9045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:59:57.274395    9045 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:59:57.277429    9045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:59:57.280796    9045 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:59:57.280864    9045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:59:57.284390    9045 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:59:57.291410    9045 start.go:297] selected driver: qemu2
	I0729 16:59:57.291419    9045 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:59:57.291426    9045 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:59:57.293383    9045 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:59:57.296405    9045 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:59:57.299462    9045 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:59:57.299495    9045 cni.go:84] Creating CNI manager for ""
	I0729 16:59:57.299503    9045 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:59:57.299507    9045 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:59:57.299545    9045 start.go:340] cluster config:
	{Name:offline-docker-532000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:59:57.303119    9045 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:59:57.310379    9045 out.go:177] * Starting "offline-docker-532000" primary control-plane node in "offline-docker-532000" cluster
	I0729 16:59:57.314390    9045 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:59:57.314425    9045 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:59:57.314435    9045 cache.go:56] Caching tarball of preloaded images
	I0729 16:59:57.314505    9045 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:59:57.314510    9045 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:59:57.314576    9045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/offline-docker-532000/config.json ...
	I0729 16:59:57.314586    9045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/offline-docker-532000/config.json: {Name:mk5f958094fc09404848c452dd40b012218e4cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:59:57.314811    9045 start.go:360] acquireMachinesLock for offline-docker-532000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:59:57.314846    9045 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "offline-docker-532000"
	I0729 16:59:57.314857    9045 start.go:93] Provisioning new machine with config: &{Name:offline-docker-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:59:57.314897    9045 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:59:57.323418    9045 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:59:57.339309    9045 start.go:159] libmachine.API.Create for "offline-docker-532000" (driver="qemu2")
	I0729 16:59:57.339347    9045 client.go:168] LocalClient.Create starting
	I0729 16:59:57.339423    9045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 16:59:57.339452    9045 main.go:141] libmachine: Decoding PEM data...
	I0729 16:59:57.339462    9045 main.go:141] libmachine: Parsing certificate...
	I0729 16:59:57.339507    9045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 16:59:57.339530    9045 main.go:141] libmachine: Decoding PEM data...
	I0729 16:59:57.339538    9045 main.go:141] libmachine: Parsing certificate...
	I0729 16:59:57.339920    9045 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:59:57.489586    9045 main.go:141] libmachine: Creating SSH key...
	I0729 16:59:57.612686    9045 main.go:141] libmachine: Creating Disk image...
	I0729 16:59:57.612694    9045 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:59:57.612862    9045 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/disk.qcow2
	I0729 16:59:57.622212    9045 main.go:141] libmachine: STDOUT: 
	I0729 16:59:57.622244    9045 main.go:141] libmachine: STDERR: 
	I0729 16:59:57.622324    9045 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/disk.qcow2 +20000M
	I0729 16:59:57.631047    9045 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:59:57.631076    9045 main.go:141] libmachine: STDERR: 
	I0729 16:59:57.631106    9045 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/disk.qcow2
	I0729 16:59:57.631116    9045 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:59:57.631132    9045 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:59:57.631159    9045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:48:81:7b:ad:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/disk.qcow2
	I0729 16:59:57.632908    9045 main.go:141] libmachine: STDOUT: 
	I0729 16:59:57.632924    9045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:59:57.632942    9045 client.go:171] duration metric: took 293.590167ms to LocalClient.Create
	I0729 16:59:59.635052    9045 start.go:128] duration metric: took 2.320148833s to createHost
	I0729 16:59:59.635083    9045 start.go:83] releasing machines lock for "offline-docker-532000", held for 2.320233709s
	W0729 16:59:59.635114    9045 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:59:59.649248    9045 out.go:177] * Deleting "offline-docker-532000" in qemu2 ...
	W0729 16:59:59.658937    9045 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:59:59.658945    9045 start.go:729] Will try again in 5 seconds ...
	I0729 17:00:04.661142    9045 start.go:360] acquireMachinesLock for offline-docker-532000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:00:04.661445    9045 start.go:364] duration metric: took 226.334µs to acquireMachinesLock for "offline-docker-532000"
	I0729 17:00:04.661569    9045 start.go:93] Provisioning new machine with config: &{Name:offline-docker-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:00:04.661804    9045 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:00:04.674059    9045 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 17:00:04.716636    9045 start.go:159] libmachine.API.Create for "offline-docker-532000" (driver="qemu2")
	I0729 17:00:04.716686    9045 client.go:168] LocalClient.Create starting
	I0729 17:00:04.716802    9045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:00:04.716877    9045 main.go:141] libmachine: Decoding PEM data...
	I0729 17:00:04.716902    9045 main.go:141] libmachine: Parsing certificate...
	I0729 17:00:04.717032    9045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:00:04.717083    9045 main.go:141] libmachine: Decoding PEM data...
	I0729 17:00:04.717106    9045 main.go:141] libmachine: Parsing certificate...
	I0729 17:00:04.718318    9045 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:00:04.880493    9045 main.go:141] libmachine: Creating SSH key...
	I0729 17:00:05.051778    9045 main.go:141] libmachine: Creating Disk image...
	I0729 17:00:05.051786    9045 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:00:05.052000    9045 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/disk.qcow2
	I0729 17:00:05.062431    9045 main.go:141] libmachine: STDOUT: 
	I0729 17:00:05.062454    9045 main.go:141] libmachine: STDERR: 
	I0729 17:00:05.062518    9045 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/disk.qcow2 +20000M
	I0729 17:00:05.071602    9045 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:00:05.071618    9045 main.go:141] libmachine: STDERR: 
	I0729 17:00:05.071632    9045 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/disk.qcow2
	I0729 17:00:05.071636    9045 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:00:05.071646    9045 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:00:05.071682    9045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:6e:8c:a9:73:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/offline-docker-532000/disk.qcow2
	I0729 17:00:05.073161    9045 main.go:141] libmachine: STDOUT: 
	I0729 17:00:05.073183    9045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:00:05.073195    9045 client.go:171] duration metric: took 356.5035ms to LocalClient.Create
	I0729 17:00:07.075362    9045 start.go:128] duration metric: took 2.413538917s to createHost
	I0729 17:00:07.075435    9045 start.go:83] releasing machines lock for "offline-docker-532000", held for 2.41396075s
	W0729 17:00:07.075730    9045 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-532000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-532000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:00:07.086818    9045 out.go:177] 
	W0729 17:00:07.094076    9045 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:00:07.094158    9045 out.go:239] * 
	* 
	W0729 17:00:07.096910    9045 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:00:07.106776    9045 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-532000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-29 17:00:07.122893 -0700 PDT m=+764.429232960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-532000 -n offline-docker-532000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-532000 -n offline-docker-532000: exit status 7 (63.484292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-532000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-532000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-532000
--- FAIL: TestOffline (10.09s)
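Every failure in this run traces back to the same line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`. That is the classic symptom of a Unix-domain socket whose path exists (or once existed) but has no daemon listening behind it, which is what `socket_vmnet_client` hits when the socket_vmnet helper is not running on the CI host. A minimal, self-contained Python sketch reproduces the same errno; the `demo.sock` path is illustrative only, not the real socket_vmnet socket:

```python
import os
import socket
import tempfile

# Bind a Unix-domain socket, then close it without ever calling listen():
# the socket file remains on disk, but a later connect() to it fails with
# ECONNREFUSED -- the same failure mode as socket_vmnet_client when the
# socket_vmnet daemon behind /var/run/socket_vmnet is down.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)   # the socket file now exists on disk
srv.close()      # ...but nothing is listening on it anymore

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    cli.connect(path)
    result = "connected"
except ConnectionRefusedError:
    result = "Connection refused"
finally:
    cli.close()

print(result)
```

So the fix belongs on the host, not in minikube: the socket_vmnet service needs to be (re)started so that something is accepting connections on `/var/run/socket_vmnet` before `qemu-system-aarch64` is launched through `socket_vmnet_client`.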

TestAddons/Setup (10.15s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-663000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-663000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.148526042s)

-- stdout --
	* [addons-663000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-663000" primary control-plane node in "addons-663000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-663000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:48:05.732726    7676 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:48:05.732872    7676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:48:05.732875    7676 out.go:304] Setting ErrFile to fd 2...
	I0729 16:48:05.732878    7676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:48:05.732986    7676 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:48:05.734057    7676 out.go:298] Setting JSON to false
	I0729 16:48:05.749951    7676 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4652,"bootTime":1722292233,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:48:05.750014    7676 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:48:05.753532    7676 out.go:177] * [addons-663000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:48:05.759363    7676 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:48:05.759439    7676 notify.go:220] Checking for updates...
	I0729 16:48:05.767430    7676 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:48:05.771487    7676 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:48:05.774505    7676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:48:05.778416    7676 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:48:05.782491    7676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:48:05.785610    7676 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:48:05.789475    7676 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:48:05.796503    7676 start.go:297] selected driver: qemu2
	I0729 16:48:05.796509    7676 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:48:05.796523    7676 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:48:05.798686    7676 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:48:05.802478    7676 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:48:05.805585    7676 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:48:05.805609    7676 cni.go:84] Creating CNI manager for ""
	I0729 16:48:05.805622    7676 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:48:05.805627    7676 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:48:05.805655    7676 start.go:340] cluster config:
	{Name:addons-663000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_c
lient SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:48:05.809614    7676 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:48:05.816490    7676 out.go:177] * Starting "addons-663000" primary control-plane node in "addons-663000" cluster
	I0729 16:48:05.820467    7676 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:48:05.820485    7676 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:48:05.820497    7676 cache.go:56] Caching tarball of preloaded images
	I0729 16:48:05.820562    7676 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:48:05.820568    7676 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:48:05.820793    7676 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/addons-663000/config.json ...
	I0729 16:48:05.820804    7676 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/addons-663000/config.json: {Name:mk5fe8da66d58600489eaf1c720a356e5a4b23c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:48:05.821180    7676 start.go:360] acquireMachinesLock for addons-663000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:48:05.821248    7676 start.go:364] duration metric: took 62.042µs to acquireMachinesLock for "addons-663000"
	I0729 16:48:05.821258    7676 start.go:93] Provisioning new machine with config: &{Name:addons-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:48:05.821291    7676 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:48:05.829437    7676 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 16:48:05.847594    7676 start.go:159] libmachine.API.Create for "addons-663000" (driver="qemu2")
	I0729 16:48:05.847621    7676 client.go:168] LocalClient.Create starting
	I0729 16:48:05.847752    7676 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 16:48:05.887072    7676 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 16:48:06.050533    7676 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:48:06.251330    7676 main.go:141] libmachine: Creating SSH key...
	I0729 16:48:06.369318    7676 main.go:141] libmachine: Creating Disk image...
	I0729 16:48:06.369324    7676 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:48:06.369533    7676 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/disk.qcow2
	I0729 16:48:06.379098    7676 main.go:141] libmachine: STDOUT: 
	I0729 16:48:06.379116    7676 main.go:141] libmachine: STDERR: 
	I0729 16:48:06.379167    7676 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/disk.qcow2 +20000M
	I0729 16:48:06.386936    7676 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:48:06.386950    7676 main.go:141] libmachine: STDERR: 
	I0729 16:48:06.386959    7676 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/disk.qcow2
	I0729 16:48:06.386963    7676 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:48:06.386991    7676 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:48:06.387018    7676 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:02:af:ae:41:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/disk.qcow2
	I0729 16:48:06.388656    7676 main.go:141] libmachine: STDOUT: 
	I0729 16:48:06.388672    7676 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:48:06.388695    7676 client.go:171] duration metric: took 541.0605ms to LocalClient.Create
	I0729 16:48:08.390901    7676 start.go:128] duration metric: took 2.569581375s to createHost
	I0729 16:48:08.390978    7676 start.go:83] releasing machines lock for "addons-663000", held for 2.569720084s
	W0729 16:48:08.391067    7676 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:48:08.398355    7676 out.go:177] * Deleting "addons-663000" in qemu2 ...
	W0729 16:48:08.427693    7676 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:48:08.427723    7676 start.go:729] Will try again in 5 seconds ...
	I0729 16:48:13.429903    7676 start.go:360] acquireMachinesLock for addons-663000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:48:13.430415    7676 start.go:364] duration metric: took 418.625µs to acquireMachinesLock for "addons-663000"
	I0729 16:48:13.430554    7676 start.go:93] Provisioning new machine with config: &{Name:addons-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:48:13.430871    7676 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:48:13.440484    7676 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 16:48:13.491104    7676 start.go:159] libmachine.API.Create for "addons-663000" (driver="qemu2")
	I0729 16:48:13.491147    7676 client.go:168] LocalClient.Create starting
	I0729 16:48:13.491263    7676 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 16:48:13.491326    7676 main.go:141] libmachine: Decoding PEM data...
	I0729 16:48:13.491343    7676 main.go:141] libmachine: Parsing certificate...
	I0729 16:48:13.491406    7676 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 16:48:13.491452    7676 main.go:141] libmachine: Decoding PEM data...
	I0729 16:48:13.491464    7676 main.go:141] libmachine: Parsing certificate...
	I0729 16:48:13.491960    7676 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:48:13.651811    7676 main.go:141] libmachine: Creating SSH key...
	I0729 16:48:13.792981    7676 main.go:141] libmachine: Creating Disk image...
	I0729 16:48:13.792992    7676 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:48:13.793215    7676 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/disk.qcow2
	I0729 16:48:13.802429    7676 main.go:141] libmachine: STDOUT: 
	I0729 16:48:13.802450    7676 main.go:141] libmachine: STDERR: 
	I0729 16:48:13.802508    7676 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/disk.qcow2 +20000M
	I0729 16:48:13.810424    7676 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:48:13.810445    7676 main.go:141] libmachine: STDERR: 
	I0729 16:48:13.810459    7676 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/disk.qcow2
	I0729 16:48:13.810465    7676 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:48:13.810475    7676 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:48:13.810512    7676 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:5f:1e:34:b5:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/addons-663000/disk.qcow2
	I0729 16:48:13.812171    7676 main.go:141] libmachine: STDOUT: 
	I0729 16:48:13.812187    7676 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:48:13.812200    7676 client.go:171] duration metric: took 321.047125ms to LocalClient.Create
	I0729 16:48:15.814407    7676 start.go:128] duration metric: took 2.383510375s to createHost
	I0729 16:48:15.814449    7676 start.go:83] releasing machines lock for "addons-663000", held for 2.383997459s
	W0729 16:48:15.814781    7676 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-663000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-663000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:48:15.824260    7676 out.go:177] 
	W0729 16:48:15.828344    7676 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:48:15.828366    7676 out.go:239] * 
	* 
	W0729 16:48:15.830951    7676 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:48:15.840218    7676 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-663000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.15s)

TestCertOptions (10.11s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-462000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-462000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.852515s)

-- stdout --
	* [cert-options-462000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-462000" primary control-plane node in "cert-options-462000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-462000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-462000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-462000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-462000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-462000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (77.953333ms)

-- stdout --
	* The control-plane node cert-options-462000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-462000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-462000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-462000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-462000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-462000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.74875ms)

-- stdout --
	* The control-plane node cert-options-462000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-462000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-462000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-462000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-462000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-29 17:11:57.05432 -0700 PDT m=+1474.361087376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-462000 -n cert-options-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-462000 -n cert-options-462000: exit status 7 (29.979209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-462000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-462000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-462000
--- FAIL: TestCertOptions (10.11s)

TestCertExpiration (199.72s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-411000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-411000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.096892333s)

-- stdout --
	* [cert-expiration-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-411000" primary control-plane node in "cert-expiration-411000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-411000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-411000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-411000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-411000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (7.469368542s)

-- stdout --
	* [cert-expiration-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-411000" primary control-plane node in "cert-expiration-411000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-411000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-411000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-411000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-411000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-411000" primary control-plane node in "cert-expiration-411000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-411000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-411000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-411000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-29 17:14:51.933458 -0700 PDT m=+1649.240330585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-411000 -n cert-expiration-411000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-411000 -n cert-expiration-411000: exit status 7 (48.832916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-411000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-411000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-411000
--- FAIL: TestCertExpiration (199.72s)
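Every failure in this report reduces to the same root cause: the `socket_vmnet` daemon on the CI host was refusing connections at `/var/run/socket_vmnet`, so the qemu2 driver could never attach a network. A minimal local triage sketch, using the socket path shown in the log above (the process-check step is an assumption about a default `socket_vmnet` install, not something captured from this run):

```shell
# Check whether the unix socket the qemu2 driver needs actually exists.
# SOCK matches the SocketVMnetPath printed in the cluster config above.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  echo "socket present: $SOCK"
else
  echo "socket missing or not a socket: $SOCK"
fi
# "Connection refused" with the socket file present usually means the daemon
# behind it has died; look for a running socket_vmnet process as well.
pgrep -fl socket_vmnet || echo "no socket_vmnet process found"
```

If the daemon is down, restarting it (and only then re-running the suite) should clear this entire class of `GUEST_PROVISION` failures, since each test fails before any Kubernetes-level assertion runs.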

                                                
                                    
TestDockerFlags (12.31s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-009000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-009000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.073955875s)

                                                
                                                
-- stdout --
	* [docker-flags-009000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-009000" primary control-plane node in "docker-flags-009000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-009000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:11:34.765148   10004 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:11:34.765281   10004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:11:34.765284   10004 out.go:304] Setting ErrFile to fd 2...
	I0729 17:11:34.765286   10004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:11:34.765441   10004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:11:34.766503   10004 out.go:298] Setting JSON to false
	I0729 17:11:34.783057   10004 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6061,"bootTime":1722292233,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:11:34.783136   10004 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:11:34.797899   10004 out.go:177] * [docker-flags-009000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:11:34.806860   10004 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:11:34.806923   10004 notify.go:220] Checking for updates...
	I0729 17:11:34.813820   10004 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:11:34.816853   10004 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:11:34.818381   10004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:11:34.821790   10004 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:11:34.824821   10004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:11:34.828110   10004 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:11:34.828181   10004 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:11:34.828229   10004 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:11:34.831721   10004 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:11:34.838790   10004 start.go:297] selected driver: qemu2
	I0729 17:11:34.838796   10004 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:11:34.838802   10004 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:11:34.840867   10004 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:11:34.844798   10004 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:11:34.846268   10004 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0729 17:11:34.846298   10004 cni.go:84] Creating CNI manager for ""
	I0729 17:11:34.846303   10004 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:11:34.846307   10004 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:11:34.846330   10004 start.go:340] cluster config:
	{Name:docker-flags-009000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:11:34.849620   10004 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:11:34.858663   10004 out.go:177] * Starting "docker-flags-009000" primary control-plane node in "docker-flags-009000" cluster
	I0729 17:11:34.862781   10004 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:11:34.862793   10004 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:11:34.862801   10004 cache.go:56] Caching tarball of preloaded images
	I0729 17:11:34.862847   10004 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:11:34.862852   10004 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:11:34.862902   10004 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/docker-flags-009000/config.json ...
	I0729 17:11:34.862912   10004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/docker-flags-009000/config.json: {Name:mk4eaf5407dcedf703bf1a3b54fa3c2245eddf0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:11:34.863198   10004 start.go:360] acquireMachinesLock for docker-flags-009000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:11:36.927769   10004 start.go:364] duration metric: took 2.064543958s to acquireMachinesLock for "docker-flags-009000"
	I0729 17:11:36.927935   10004 start.go:93] Provisioning new machine with config: &{Name:docker-flags-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:11:36.928259   10004 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:11:36.933050   10004 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 17:11:36.981106   10004 start.go:159] libmachine.API.Create for "docker-flags-009000" (driver="qemu2")
	I0729 17:11:36.981152   10004 client.go:168] LocalClient.Create starting
	I0729 17:11:36.981289   10004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:11:36.981346   10004 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:36.981366   10004 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:36.981432   10004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:11:36.981476   10004 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:36.981488   10004 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:36.982094   10004 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:11:37.174448   10004 main.go:141] libmachine: Creating SSH key...
	I0729 17:11:37.267616   10004 main.go:141] libmachine: Creating Disk image...
	I0729 17:11:37.267627   10004 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:11:37.267789   10004 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/disk.qcow2
	I0729 17:11:37.276899   10004 main.go:141] libmachine: STDOUT: 
	I0729 17:11:37.276917   10004 main.go:141] libmachine: STDERR: 
	I0729 17:11:37.276966   10004 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/disk.qcow2 +20000M
	I0729 17:11:37.284988   10004 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:11:37.285000   10004 main.go:141] libmachine: STDERR: 
	I0729 17:11:37.285018   10004 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/disk.qcow2
	I0729 17:11:37.285022   10004 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:11:37.285036   10004 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:11:37.285060   10004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:26:35:3f:5b:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/disk.qcow2
	I0729 17:11:37.286621   10004 main.go:141] libmachine: STDOUT: 
	I0729 17:11:37.286635   10004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:11:37.286652   10004 client.go:171] duration metric: took 305.4925ms to LocalClient.Create
	I0729 17:11:39.288820   10004 start.go:128] duration metric: took 2.360534625s to createHost
	I0729 17:11:39.288878   10004 start.go:83] releasing machines lock for "docker-flags-009000", held for 2.3610405s
	W0729 17:11:39.288995   10004 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:11:39.300284   10004 out.go:177] * Deleting "docker-flags-009000" in qemu2 ...
	W0729 17:11:39.329163   10004 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:11:39.329195   10004 start.go:729] Will try again in 5 seconds ...
	I0729 17:11:44.331362   10004 start.go:360] acquireMachinesLock for docker-flags-009000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:11:44.419288   10004 start.go:364] duration metric: took 87.756417ms to acquireMachinesLock for "docker-flags-009000"
	I0729 17:11:44.419433   10004 start.go:93] Provisioning new machine with config: &{Name:docker-flags-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:11:44.419680   10004 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:11:44.430933   10004 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 17:11:44.479577   10004 start.go:159] libmachine.API.Create for "docker-flags-009000" (driver="qemu2")
	I0729 17:11:44.479763   10004 client.go:168] LocalClient.Create starting
	I0729 17:11:44.479855   10004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:11:44.479915   10004 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:44.479932   10004 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:44.479993   10004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:11:44.480022   10004 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:44.480032   10004 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:44.480568   10004 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:11:44.641044   10004 main.go:141] libmachine: Creating SSH key...
	I0729 17:11:44.743815   10004 main.go:141] libmachine: Creating Disk image...
	I0729 17:11:44.743823   10004 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:11:44.744082   10004 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/disk.qcow2
	I0729 17:11:44.753112   10004 main.go:141] libmachine: STDOUT: 
	I0729 17:11:44.753128   10004 main.go:141] libmachine: STDERR: 
	I0729 17:11:44.753176   10004 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/disk.qcow2 +20000M
	I0729 17:11:44.761098   10004 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:11:44.761113   10004 main.go:141] libmachine: STDERR: 
	I0729 17:11:44.761125   10004 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/disk.qcow2
	I0729 17:11:44.761129   10004 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:11:44.761150   10004 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:11:44.761188   10004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:ac:23:dd:d0:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/docker-flags-009000/disk.qcow2
	I0729 17:11:44.762839   10004 main.go:141] libmachine: STDOUT: 
	I0729 17:11:44.762853   10004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:11:44.762864   10004 client.go:171] duration metric: took 283.095958ms to LocalClient.Create
	I0729 17:11:46.765168   10004 start.go:128] duration metric: took 2.345434667s to createHost
	I0729 17:11:46.765258   10004 start.go:83] releasing machines lock for "docker-flags-009000", held for 2.3459185s
	W0729 17:11:46.765590   10004 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:11:46.775149   10004 out.go:177] 
	W0729 17:11:46.783212   10004 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:11:46.783248   10004 out.go:239] * 
	* 
	W0729 17:11:46.785759   10004 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:11:46.796134   10004 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-009000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-009000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-009000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (84.411958ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-009000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-009000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-009000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-009000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-009000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-009000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-009000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-009000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-009000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.495125ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-009000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-009000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-009000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-009000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-009000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-009000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-29 17:11:46.940281 -0700 PDT m=+1464.247041793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-009000 -n docker-flags-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-009000 -n docker-flags-009000: exit status 7 (29.029708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-009000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-009000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-009000
--- FAIL: TestDockerFlags (12.31s)
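For context on what `TestDockerFlags` would verify had the VM booted: it asserts that each `--docker-env` pair surfaces in `systemctl show docker --property=Environment` inside the guest. A minimal sketch of that substring check, using an illustrative sample line rather than real output from this (failed) run:

```shell
# OUT stands in for `systemctl show docker --property=Environment` output;
# this sample line is illustrative, not captured from a VM.
OUT="Environment=FOO=BAR BAZ=BAT"
RESULT=ok
# Check that every env pair passed via --docker-env appears in the property.
for pair in "FOO=BAR" "BAZ=BAT"; do
  case "$OUT" in
    *"$pair"*) echo "found $pair" ;;
    *)         RESULT=missing; echo "missing $pair" ;;
  esac
done
echo "check: $RESULT"   # prints "check: ok" for this sample
```

In this run the check never executed: the guest was `Stopped`, so the `ssh` command exited 83 and the assertions failed against minikube's "host is not running" message instead of real `systemctl` output.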

                                                
                                    
TestForceSystemdFlag (10.16s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-835000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-835000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.934097583s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-835000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-835000" primary control-plane node in "force-systemd-flag-835000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-835000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0729 17:11:10.938852    9871 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:11:10.938985    9871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:11:10.938988    9871 out.go:304] Setting ErrFile to fd 2...
	I0729 17:11:10.938990    9871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:11:10.939126    9871 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:11:10.940186    9871 out.go:298] Setting JSON to false
	I0729 17:11:10.956152    9871 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6037,"bootTime":1722292233,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:11:10.956259    9871 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:11:10.960773    9871 out.go:177] * [force-systemd-flag-835000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:11:10.967753    9871 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:11:10.967815    9871 notify.go:220] Checking for updates...
	I0729 17:11:10.975647    9871 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:11:10.978699    9871 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:11:10.981699    9871 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:11:10.984711    9871 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:11:10.985983    9871 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:11:10.989090    9871 config.go:182] Loaded profile config "NoKubernetes-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0729 17:11:10.989158    9871 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:11:10.989210    9871 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:11:10.993694    9871 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:11:10.998673    9871 start.go:297] selected driver: qemu2
	I0729 17:11:10.998678    9871 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:11:10.998684    9871 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:11:11.000885    9871 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:11:11.002876    9871 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:11:11.006774    9871 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 17:11:11.006808    9871 cni.go:84] Creating CNI manager for ""
	I0729 17:11:11.006817    9871 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:11:11.006822    9871 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:11:11.006856    9871 start.go:340] cluster config:
	{Name:force-systemd-flag-835000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:11:11.010691    9871 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:11:11.018685    9871 out.go:177] * Starting "force-systemd-flag-835000" primary control-plane node in "force-systemd-flag-835000" cluster
	I0729 17:11:11.022663    9871 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:11:11.022709    9871 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:11:11.022724    9871 cache.go:56] Caching tarball of preloaded images
	I0729 17:11:11.022827    9871 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:11:11.022836    9871 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:11:11.022896    9871 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/force-systemd-flag-835000/config.json ...
	I0729 17:11:11.022911    9871 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/force-systemd-flag-835000/config.json: {Name:mk3afd11647f3485de8d2731ef3c1100eaa9012f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:11:11.023261    9871 start.go:360] acquireMachinesLock for force-systemd-flag-835000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:11:11.023294    9871 start.go:364] duration metric: took 26.083µs to acquireMachinesLock for "force-systemd-flag-835000"
	I0729 17:11:11.023304    9871 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:11:11.023341    9871 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:11:11.031836    9871 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 17:11:11.047713    9871 start.go:159] libmachine.API.Create for "force-systemd-flag-835000" (driver="qemu2")
	I0729 17:11:11.047751    9871 client.go:168] LocalClient.Create starting
	I0729 17:11:11.047807    9871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:11:11.047837    9871 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:11.047848    9871 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:11.047896    9871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:11:11.047918    9871 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:11.047929    9871 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:11.050116    9871 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:11:11.224827    9871 main.go:141] libmachine: Creating SSH key...
	I0729 17:11:11.388222    9871 main.go:141] libmachine: Creating Disk image...
	I0729 17:11:11.388228    9871 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:11:11.388451    9871 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/disk.qcow2
	I0729 17:11:11.397839    9871 main.go:141] libmachine: STDOUT: 
	I0729 17:11:11.397858    9871 main.go:141] libmachine: STDERR: 
	I0729 17:11:11.397918    9871 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/disk.qcow2 +20000M
	I0729 17:11:11.405705    9871 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:11:11.405725    9871 main.go:141] libmachine: STDERR: 
	I0729 17:11:11.405736    9871 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/disk.qcow2
	I0729 17:11:11.405742    9871 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:11:11.405754    9871 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:11:11.405781    9871 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:3a:6a:9d:c4:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/disk.qcow2
	I0729 17:11:11.407332    9871 main.go:141] libmachine: STDOUT: 
	I0729 17:11:11.407347    9871 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:11:11.407370    9871 client.go:171] duration metric: took 359.614417ms to LocalClient.Create
	I0729 17:11:13.409613    9871 start.go:128] duration metric: took 2.386250167s to createHost
	I0729 17:11:13.409698    9871 start.go:83] releasing machines lock for "force-systemd-flag-835000", held for 2.386395084s
	W0729 17:11:13.409842    9871 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:11:13.425026    9871 out.go:177] * Deleting "force-systemd-flag-835000" in qemu2 ...
	W0729 17:11:13.451631    9871 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:11:13.451662    9871 start.go:729] Will try again in 5 seconds ...
	I0729 17:11:18.453924    9871 start.go:360] acquireMachinesLock for force-systemd-flag-835000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:11:18.454320    9871 start.go:364] duration metric: took 299.375µs to acquireMachinesLock for "force-systemd-flag-835000"
	I0729 17:11:18.454441    9871 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:11:18.454746    9871 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:11:18.461115    9871 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 17:11:18.511000    9871 start.go:159] libmachine.API.Create for "force-systemd-flag-835000" (driver="qemu2")
	I0729 17:11:18.511050    9871 client.go:168] LocalClient.Create starting
	I0729 17:11:18.511160    9871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:11:18.511220    9871 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:18.511244    9871 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:18.511313    9871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:11:18.511357    9871 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:18.511369    9871 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:18.511907    9871 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:11:18.671052    9871 main.go:141] libmachine: Creating SSH key...
	I0729 17:11:18.768176    9871 main.go:141] libmachine: Creating Disk image...
	I0729 17:11:18.768183    9871 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:11:18.768397    9871 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/disk.qcow2
	I0729 17:11:18.777567    9871 main.go:141] libmachine: STDOUT: 
	I0729 17:11:18.777586    9871 main.go:141] libmachine: STDERR: 
	I0729 17:11:18.777651    9871 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/disk.qcow2 +20000M
	I0729 17:11:18.785417    9871 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:11:18.785432    9871 main.go:141] libmachine: STDERR: 
	I0729 17:11:18.785445    9871 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/disk.qcow2
	I0729 17:11:18.785450    9871 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:11:18.785462    9871 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:11:18.785502    9871 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:60:0d:ab:63:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-flag-835000/disk.qcow2
	I0729 17:11:18.787098    9871 main.go:141] libmachine: STDOUT: 
	I0729 17:11:18.787115    9871 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:11:18.787129    9871 client.go:171] duration metric: took 276.07175ms to LocalClient.Create
	I0729 17:11:20.789317    9871 start.go:128] duration metric: took 2.334537708s to createHost
	I0729 17:11:20.789424    9871 start.go:83] releasing machines lock for "force-systemd-flag-835000", held for 2.335077875s
	W0729 17:11:20.789745    9871 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-835000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-835000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:11:20.805458    9871 out.go:177] 
	W0729 17:11:20.813470    9871 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:11:20.813494    9871 out.go:239] * 
	* 
	W0729 17:11:20.816132    9871 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:11:20.824367    9871 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-835000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-835000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-835000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (85.47525ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-flag-835000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-835000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-835000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-29 17:11:20.932877 -0700 PDT m=+1438.239622793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-835000 -n force-systemd-flag-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-835000 -n force-systemd-flag-835000: exit status 7 (38.557458ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-835000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-835000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-835000
--- FAIL: TestForceSystemdFlag (10.16s)
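Every failure above shares one root cause: the qemu2 driver cannot reach the networking helper (`Failed to connect to "/var/run/socket_vmnet": Connection refused`). A minimal pre-flight check along these lines could surface that before the suite runs; the socket path mirrors `SocketVMnetPath` from the cluster config shown in the logs, while the Homebrew service name is an assumption about how socket_vmnet is installed on this agent.

```shell
#!/bin/sh
# Pre-flight sketch: verify the socket_vmnet unix socket exists before starting
# any qemu2-driver test. SOCKET defaults to SocketVMnetPath from the config.
SOCKET="${SOCKET:-/var/run/socket_vmnet}"

if [ -S "$SOCKET" ]; then
    echo "socket_vmnet reachable at $SOCKET"
else
    # The service name below is an assumption (typical Homebrew install).
    echo "socket_vmnet missing at $SOCKET; try: sudo brew services start socket_vmnet"
fi
```

A gate like this would reduce the cascade of `Connection refused` test failures in this run to a single, immediately actionable infrastructure error.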

                                                
                                    
TestForceSystemdEnv (10.16s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-813000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-813000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.95294925s)

                                                
                                                
-- stdout --
	* [force-systemd-env-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-813000" primary control-plane node in "force-systemd-env-813000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-813000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0729 17:11:24.598926    9949 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:11:24.599103    9949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:11:24.599106    9949 out.go:304] Setting ErrFile to fd 2...
	I0729 17:11:24.599108    9949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:11:24.599227    9949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:11:24.600287    9949 out.go:298] Setting JSON to false
	I0729 17:11:24.616417    9949 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6051,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:11:24.616494    9949 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:11:24.623461    9949 out.go:177] * [force-systemd-env-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:11:24.626403    9949 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:11:24.626509    9949 notify.go:220] Checking for updates...
	I0729 17:11:24.638458    9949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:11:24.642405    9949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:11:24.646400    9949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:11:24.649440    9949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:11:24.652384    9949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0729 17:11:24.655767    9949 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:11:24.655823    9949 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:11:24.660431    9949 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:11:24.667379    9949 start.go:297] selected driver: qemu2
	I0729 17:11:24.667386    9949 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:11:24.667393    9949 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:11:24.669650    9949 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:11:24.672434    9949 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:11:24.675505    9949 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 17:11:24.675528    9949 cni.go:84] Creating CNI manager for ""
	I0729 17:11:24.675535    9949 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:11:24.675541    9949 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:11:24.675574    9949 start.go:340] cluster config:
	{Name:force-systemd-env-813000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:11:24.679272    9949 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:11:24.686327    9949 out.go:177] * Starting "force-systemd-env-813000" primary control-plane node in "force-systemd-env-813000" cluster
	I0729 17:11:24.690361    9949 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:11:24.690380    9949 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:11:24.690393    9949 cache.go:56] Caching tarball of preloaded images
	I0729 17:11:24.690463    9949 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:11:24.690468    9949 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:11:24.690537    9949 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/force-systemd-env-813000/config.json ...
	I0729 17:11:24.690554    9949 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/force-systemd-env-813000/config.json: {Name:mk81806ee14f85d2a3c39b2901de7245c43f29fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:11:24.690783    9949 start.go:360] acquireMachinesLock for force-systemd-env-813000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:11:24.690822    9949 start.go:364] duration metric: took 30.833µs to acquireMachinesLock for "force-systemd-env-813000"
	I0729 17:11:24.690834    9949 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:11:24.690865    9949 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:11:24.698401    9949 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 17:11:24.716518    9949 start.go:159] libmachine.API.Create for "force-systemd-env-813000" (driver="qemu2")
	I0729 17:11:24.716548    9949 client.go:168] LocalClient.Create starting
	I0729 17:11:24.716627    9949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:11:24.716660    9949 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:24.716670    9949 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:24.716715    9949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:11:24.716743    9949 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:24.716751    9949 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:24.717112    9949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:11:24.867375    9949 main.go:141] libmachine: Creating SSH key...
	I0729 17:11:25.021039    9949 main.go:141] libmachine: Creating Disk image...
	I0729 17:11:25.021047    9949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:11:25.021241    9949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/disk.qcow2
	I0729 17:11:25.030673    9949 main.go:141] libmachine: STDOUT: 
	I0729 17:11:25.030696    9949 main.go:141] libmachine: STDERR: 
	I0729 17:11:25.030751    9949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/disk.qcow2 +20000M
	I0729 17:11:25.038877    9949 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:11:25.038891    9949 main.go:141] libmachine: STDERR: 
	I0729 17:11:25.038907    9949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/disk.qcow2
	I0729 17:11:25.038912    9949 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:11:25.038924    9949 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:11:25.038959    9949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:e5:5a:e2:43:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/disk.qcow2
	I0729 17:11:25.040537    9949 main.go:141] libmachine: STDOUT: 
	I0729 17:11:25.040552    9949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:11:25.040569    9949 client.go:171] duration metric: took 324.01475ms to LocalClient.Create
	I0729 17:11:27.042746    9949 start.go:128] duration metric: took 2.351859125s to createHost
	I0729 17:11:27.042791    9949 start.go:83] releasing machines lock for "force-systemd-env-813000", held for 2.351959834s
	W0729 17:11:27.042884    9949 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:11:27.050005    9949 out.go:177] * Deleting "force-systemd-env-813000" in qemu2 ...
	W0729 17:11:27.083076    9949 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:11:27.083108    9949 start.go:729] Will try again in 5 seconds ...
	I0729 17:11:32.083471    9949 start.go:360] acquireMachinesLock for force-systemd-env-813000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:11:32.083943    9949 start.go:364] duration metric: took 380.625µs to acquireMachinesLock for "force-systemd-env-813000"
	I0729 17:11:32.084093    9949 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:11:32.084360    9949 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:11:32.090021    9949 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 17:11:32.141471    9949 start.go:159] libmachine.API.Create for "force-systemd-env-813000" (driver="qemu2")
	I0729 17:11:32.141524    9949 client.go:168] LocalClient.Create starting
	I0729 17:11:32.141650    9949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:11:32.141708    9949 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:32.141726    9949 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:32.141805    9949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:11:32.141856    9949 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:32.141868    9949 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:32.142361    9949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:11:32.336551    9949 main.go:141] libmachine: Creating SSH key...
	I0729 17:11:32.458999    9949 main.go:141] libmachine: Creating Disk image...
	I0729 17:11:32.459008    9949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:11:32.459166    9949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/disk.qcow2
	I0729 17:11:32.470170    9949 main.go:141] libmachine: STDOUT: 
	I0729 17:11:32.470193    9949 main.go:141] libmachine: STDERR: 
	I0729 17:11:32.470262    9949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/disk.qcow2 +20000M
	I0729 17:11:32.479814    9949 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:11:32.479832    9949 main.go:141] libmachine: STDERR: 
	I0729 17:11:32.479845    9949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/disk.qcow2
	I0729 17:11:32.479851    9949 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:11:32.479864    9949 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:11:32.479897    9949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:cb:f2:6f:57:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/force-systemd-env-813000/disk.qcow2
	I0729 17:11:32.481517    9949 main.go:141] libmachine: STDOUT: 
	I0729 17:11:32.481536    9949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:11:32.481549    9949 client.go:171] duration metric: took 340.018417ms to LocalClient.Create
	I0729 17:11:34.483749    9949 start.go:128] duration metric: took 2.399352625s to createHost
	I0729 17:11:34.483810    9949 start.go:83] releasing machines lock for "force-systemd-env-813000", held for 2.39984175s
	W0729 17:11:34.484184    9949 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-813000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-813000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:11:34.496865    9949 out.go:177] 
	W0729 17:11:34.501884    9949 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:11:34.501920    9949 out.go:239] * 
	* 
	W0729 17:11:34.504874    9949 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:11:34.513792    9949 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-813000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-813000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-813000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (69.752042ms)

-- stdout --
	* The control-plane node force-systemd-env-813000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-813000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-813000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-29 17:11:34.595024 -0700 PDT m=+1451.901777335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-813000 -n force-systemd-env-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-813000 -n force-systemd-env-813000: exit status 7 (35.052209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-813000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-813000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-813000
--- FAIL: TestForceSystemdEnv (10.16s)
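Every error in the block above reduces to one root cause: QEMU is launched through `socket_vmnet_client`, and the connect to the `/var/run/socket_vmnet` unix socket is refused, meaning no socket_vmnet daemon is accepting connections on that path. The "Connection refused" semantics can be reproduced in isolation. The sketch below is illustrative only and uses a throwaway temp-dir path, not the real `/var/run/socket_vmnet`: a unix-socket file that still exists on disk but has no live listener refuses connections exactly as seen in the log.

```python
import os
import socket
import tempfile

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "socket_vmnet")

# Bind creates the socket file on disk; closing the socket leaves the
# file behind but removes the listener -- mimicking a dead daemon.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.close()

# A client connect now fails with ECONNREFUSED, the same error
# socket_vmnet_client reports in the test log.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    cli.connect(path)
    result = "connected"
except ConnectionRefusedError:
    result = "Connection refused"
finally:
    cli.close()

print(result)  # → Connection refused
```

This is why minikube's retry five seconds later fails identically: the socket file's presence is irrelevant while the daemon behind it is not running.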

TestErrorSpam/setup (9.85s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-030000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-030000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 --driver=qemu2 : exit status 80 (9.8493745s)

-- stdout --
	* [nospam-030000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-030000" primary control-plane node in "nospam-030000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-030000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-030000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-030000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-030000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-030000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19346
- KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-030000" primary control-plane node in "nospam-030000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-030000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-030000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.85s)
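TestErrorSpam/setup never reaches the kubeadm init sub-steps it asserts on ("Generating certificates and keys ...", etc.) because host creation aborts at the same socket_vmnet connect. A quick pre-flight check is to test whether the path exists as a unix socket at all. The helper below is a diagnostic sketch; the path and the remediation command assume a Homebrew-based socket_vmnet install, which is the setup minikube's qemu2 driver documentation describes.

```shell
#!/bin/sh
# Report whether a unix-domain socket exists at the given path.
check_socket() {
    if [ -S "$1" ]; then
        echo "socket present: $1"
    else
        echo "socket missing: $1"
    fi
}

# On the failing CI host, this is the path the qemu2 driver dials:
check_socket /var/run/socket_vmnet

# If it is missing -- or present but refusing connections -- restarting the
# daemon usually helps; with a Homebrew install that is typically:
#   sudo brew services restart socket_vmnet
# Note a socket *file* can linger after the daemon dies, in which case
# connects still fail with "Connection refused" despite the file existing.
```

Because every qemu2-driver test in this run dials the same socket, a single dead daemon explains the correlated 10-second failures across TestForceSystemdEnv, TestErrorSpam, and the TestFunctional serial tests that follow.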

TestFunctional/serial/StartWithProxy (10.01s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-905000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-905000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.938066292s)

-- stdout --
	* [functional-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-905000" primary control-plane node in "functional-905000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-905000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51085 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51085 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51085 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-905000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-905000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19346
- KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-905000" primary control-plane node in "functional-905000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-905000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51085 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51085 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51085 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-905000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (68.174083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.01s)

TestFunctional/serial/SoftStart (5.25s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-905000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-905000 --alsologtostderr -v=8: exit status 80 (5.183072042s)

-- stdout --
	* [functional-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-905000" primary control-plane node in "functional-905000" cluster
	* Restarting existing qemu2 VM for "functional-905000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-905000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:48:46.381967    7819 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:48:46.382106    7819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:48:46.382109    7819 out.go:304] Setting ErrFile to fd 2...
	I0729 16:48:46.382112    7819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:48:46.382240    7819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:48:46.383261    7819 out.go:298] Setting JSON to false
	I0729 16:48:46.399325    7819 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4693,"bootTime":1722292233,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:48:46.399395    7819 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:48:46.403255    7819 out.go:177] * [functional-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:48:46.410258    7819 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:48:46.410311    7819 notify.go:220] Checking for updates...
	I0729 16:48:46.417181    7819 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:48:46.421201    7819 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:48:46.425165    7819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:48:46.428224    7819 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:48:46.431257    7819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:48:46.434484    7819 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:48:46.434535    7819 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:48:46.439208    7819 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:48:46.446220    7819 start.go:297] selected driver: qemu2
	I0729 16:48:46.446228    7819 start.go:901] validating driver "qemu2" against &{Name:functional-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:48:46.446300    7819 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:48:46.448607    7819 cni.go:84] Creating CNI manager for ""
	I0729 16:48:46.448626    7819 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:48:46.448673    7819 start.go:340] cluster config:
	{Name:functional-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-905000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:48:46.452181    7819 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:48:46.458220    7819 out.go:177] * Starting "functional-905000" primary control-plane node in "functional-905000" cluster
	I0729 16:48:46.462230    7819 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:48:46.462245    7819 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:48:46.462257    7819 cache.go:56] Caching tarball of preloaded images
	I0729 16:48:46.462317    7819 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:48:46.462323    7819 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:48:46.462381    7819 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/functional-905000/config.json ...
	I0729 16:48:46.462851    7819 start.go:360] acquireMachinesLock for functional-905000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:48:46.462881    7819 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "functional-905000"
	I0729 16:48:46.462890    7819 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:48:46.462897    7819 fix.go:54] fixHost starting: 
	I0729 16:48:46.463009    7819 fix.go:112] recreateIfNeeded on functional-905000: state=Stopped err=<nil>
	W0729 16:48:46.463018    7819 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:48:46.470196    7819 out.go:177] * Restarting existing qemu2 VM for "functional-905000" ...
	I0729 16:48:46.474195    7819 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:48:46.474239    7819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:9a:b6:15:1b:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/disk.qcow2
	I0729 16:48:46.476158    7819 main.go:141] libmachine: STDOUT: 
	I0729 16:48:46.476176    7819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:48:46.476203    7819 fix.go:56] duration metric: took 13.3075ms for fixHost
	I0729 16:48:46.476208    7819 start.go:83] releasing machines lock for "functional-905000", held for 13.32275ms
	W0729 16:48:46.476215    7819 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:48:46.476249    7819 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:48:46.476253    7819 start.go:729] Will try again in 5 seconds ...
	I0729 16:48:51.478363    7819 start.go:360] acquireMachinesLock for functional-905000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:48:51.478814    7819 start.go:364] duration metric: took 346.791µs to acquireMachinesLock for "functional-905000"
	I0729 16:48:51.478939    7819 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:48:51.478956    7819 fix.go:54] fixHost starting: 
	I0729 16:48:51.479631    7819 fix.go:112] recreateIfNeeded on functional-905000: state=Stopped err=<nil>
	W0729 16:48:51.479656    7819 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:48:51.487060    7819 out.go:177] * Restarting existing qemu2 VM for "functional-905000" ...
	I0729 16:48:51.489933    7819 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:48:51.490161    7819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:9a:b6:15:1b:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/disk.qcow2
	I0729 16:48:51.498806    7819 main.go:141] libmachine: STDOUT: 
	I0729 16:48:51.498876    7819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:48:51.498963    7819 fix.go:56] duration metric: took 20.003417ms for fixHost
	I0729 16:48:51.499246    7819 start.go:83] releasing machines lock for "functional-905000", held for 20.402084ms
	W0729 16:48:51.499469    7819 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-905000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-905000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:48:51.507103    7819 out.go:177] 
	W0729 16:48:51.511071    7819 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:48:51.511097    7819 out.go:239] * 
	* 
	W0729 16:48:51.513719    7819 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:48:51.522055    7819 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-905000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.184923333s for "functional-905000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (66.638958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.94ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-905000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (29.937709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
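The KubeContext failure is a knock-on effect: because the cluster never started, minikube never wrote a context into the kubeconfig, so `kubectl config current-context` exits non-zero with "current-context is not set". The check the test performs can be reproduced safely with a sketch like this (assumes kubectl is on PATH; the fallback message is illustrative):

```shell
# Reproduce the test's context check without aborting the shell:
# `kubectl config current-context` exits 1 when no context is set.
if ctx=$(kubectl config current-context 2>/dev/null); then
    echo "current context: $ctx"
else
    echo "no current context set"
fi
```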

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-905000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-905000 get po -A: exit status 1 (26.32375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-905000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-905000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-905000\n"*: args "kubectl --context functional-905000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-905000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (30.63925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl images: exit status 83 (46.735834ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (39.96625ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-905000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.752459ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.897292ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 kubectl -- --context functional-905000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 kubectl -- --context functional-905000 get pods: exit status 1 (711.89725ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-905000
	* no server found for cluster "functional-905000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-905000 kubectl -- --context functional-905000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (32.311166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.74s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.98s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-905000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-905000 get pods: exit status 1 (946.547458ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-905000
	* no server found for cluster "functional-905000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-905000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (28.2825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.98s)

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-905000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-905000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.196622167s)

-- stdout --
	* [functional-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-905000" primary control-plane node in "functional-905000" cluster
	* Restarting existing qemu2 VM for "functional-905000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-905000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-905000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-905000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.197138917s for "functional-905000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (68.197125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)

+
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-905000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-905000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.928625ms)

** stderr ** 
	error: context "functional-905000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-905000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (30.074833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 logs: exit status 83 (75.561166ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
	|         | -p download-only-017000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
	| delete  | -p download-only-017000                                                  | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
	| start   | -o=json --download-only                                                  | download-only-107000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
	|         | -p download-only-107000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
	| delete  | -p download-only-107000                                                  | download-only-107000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
	| start   | -o=json --download-only                                                  | download-only-330000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
	|         | -p download-only-330000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	| delete  | -p download-only-330000                                                  | download-only-330000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	| delete  | -p download-only-017000                                                  | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	| delete  | -p download-only-107000                                                  | download-only-107000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	| delete  | -p download-only-330000                                                  | download-only-330000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	| start   | --download-only -p                                                       | binary-mirror-394000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | binary-mirror-394000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51052                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-394000                                                  | binary-mirror-394000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	| addons  | enable dashboard -p                                                      | addons-663000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | addons-663000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-663000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | addons-663000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-663000 --wait=true                                             | addons-663000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-663000                                                         | addons-663000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	| start   | -p nospam-030000 -n=1 --memory=2250 --wait=false                         | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-030000                                                         | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	| start   | -p functional-905000                                                     | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-905000                                                     | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-905000 cache add                                              | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-905000 cache add                                              | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-905000 cache add                                              | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-905000 cache add                                              | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	|         | minikube-local-cache-test:functional-905000                              |                      |         |         |                     |                     |
	| cache   | functional-905000 cache delete                                           | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	|         | minikube-local-cache-test:functional-905000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	| ssh     | functional-905000 ssh sudo                                               | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-905000                                                        | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-905000 ssh                                                    | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-905000 cache reload                                           | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	| ssh     | functional-905000 ssh                                                    | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-905000 kubectl --                                             | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | --context functional-905000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-905000                                                     | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:48:56
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:48:56.636378    7894 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:48:56.636516    7894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:48:56.636518    7894 out.go:304] Setting ErrFile to fd 2...
	I0729 16:48:56.636519    7894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:48:56.636622    7894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:48:56.637648    7894 out.go:298] Setting JSON to false
	I0729 16:48:56.653472    7894 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4703,"bootTime":1722292233,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:48:56.653637    7894 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:48:56.658557    7894 out.go:177] * [functional-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:48:56.666512    7894 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:48:56.666564    7894 notify.go:220] Checking for updates...
	I0729 16:48:56.673502    7894 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:48:56.677583    7894 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:48:56.680597    7894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:48:56.689550    7894 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:48:56.698504    7894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:48:56.701910    7894 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:48:56.701962    7894 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:48:56.706578    7894 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:48:56.713601    7894 start.go:297] selected driver: qemu2
	I0729 16:48:56.713606    7894 start.go:901] validating driver "qemu2" against &{Name:functional-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:48:56.713657    7894 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:48:56.716010    7894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:48:56.716034    7894 cni.go:84] Creating CNI manager for ""
	I0729 16:48:56.716044    7894 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:48:56.716089    7894 start.go:340] cluster config:
	{Name:functional-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:48:56.719928    7894 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:48:56.724676    7894 out.go:177] * Starting "functional-905000" primary control-plane node in "functional-905000" cluster
	I0729 16:48:56.731537    7894 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:48:56.731560    7894 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:48:56.731573    7894 cache.go:56] Caching tarball of preloaded images
	I0729 16:48:56.731652    7894 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:48:56.731657    7894 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:48:56.731722    7894 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/functional-905000/config.json ...
	I0729 16:48:56.732251    7894 start.go:360] acquireMachinesLock for functional-905000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:48:56.732288    7894 start.go:364] duration metric: took 32.167µs to acquireMachinesLock for "functional-905000"
	I0729 16:48:56.732296    7894 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:48:56.732301    7894 fix.go:54] fixHost starting: 
	I0729 16:48:56.732429    7894 fix.go:112] recreateIfNeeded on functional-905000: state=Stopped err=<nil>
	W0729 16:48:56.732435    7894 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:48:56.740584    7894 out.go:177] * Restarting existing qemu2 VM for "functional-905000" ...
	I0729 16:48:56.744528    7894 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:48:56.744566    7894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:9a:b6:15:1b:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/disk.qcow2
	I0729 16:48:56.746929    7894 main.go:141] libmachine: STDOUT: 
	I0729 16:48:56.746947    7894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:48:56.746977    7894 fix.go:56] duration metric: took 14.6785ms for fixHost
	I0729 16:48:56.746980    7894 start.go:83] releasing machines lock for "functional-905000", held for 14.688417ms
	W0729 16:48:56.746989    7894 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:48:56.747029    7894 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:48:56.747034    7894 start.go:729] Will try again in 5 seconds ...
	I0729 16:49:01.749249    7894 start.go:360] acquireMachinesLock for functional-905000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:49:01.749699    7894 start.go:364] duration metric: took 349.333µs to acquireMachinesLock for "functional-905000"
	I0729 16:49:01.749827    7894 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:49:01.749843    7894 fix.go:54] fixHost starting: 
	I0729 16:49:01.750560    7894 fix.go:112] recreateIfNeeded on functional-905000: state=Stopped err=<nil>
	W0729 16:49:01.750583    7894 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:49:01.755893    7894 out.go:177] * Restarting existing qemu2 VM for "functional-905000" ...
	I0729 16:49:01.763967    7894 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:49:01.764199    7894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:9a:b6:15:1b:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/disk.qcow2
	I0729 16:49:01.774091    7894 main.go:141] libmachine: STDOUT: 
	I0729 16:49:01.774141    7894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:49:01.774220    7894 fix.go:56] duration metric: took 24.378167ms for fixHost
	I0729 16:49:01.774231    7894 start.go:83] releasing machines lock for "functional-905000", held for 24.516416ms
	W0729 16:49:01.774425    7894 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-905000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:49:01.780943    7894 out.go:177] 
	W0729 16:49:01.783945    7894 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:49:01.783970    7894 out.go:239] * 
	W0729 16:49:01.785713    7894 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:49:01.793920    7894 out.go:177] 
	
	
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

                                                
                                                
-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-905000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
|         | -p download-only-017000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
| delete  | -p download-only-017000                                                  | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
| start   | -o=json --download-only                                                  | download-only-107000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
|         | -p download-only-107000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
| delete  | -p download-only-107000                                                  | download-only-107000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
| start   | -o=json --download-only                                                  | download-only-330000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
|         | -p download-only-330000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| delete  | -p download-only-330000                                                  | download-only-330000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| delete  | -p download-only-017000                                                  | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| delete  | -p download-only-107000                                                  | download-only-107000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| delete  | -p download-only-330000                                                  | download-only-330000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| start   | --download-only -p                                                       | binary-mirror-394000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | binary-mirror-394000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51052                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-394000                                                  | binary-mirror-394000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| addons  | enable dashboard -p                                                      | addons-663000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | addons-663000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-663000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | addons-663000                                                            |                      |         |         |                     |                     |
| start   | -p addons-663000 --wait=true                                             | addons-663000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-663000                                                         | addons-663000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| start   | -p nospam-030000 -n=1 --memory=2250 --wait=false                         | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-030000                                                         | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| start   | -p functional-905000                                                     | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-905000                                                     | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-905000 cache add                                              | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-905000 cache add                                              | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-905000 cache add                                              | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-905000 cache add                                              | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | minikube-local-cache-test:functional-905000                              |                      |         |         |                     |                     |
| cache   | functional-905000 cache delete                                           | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | minikube-local-cache-test:functional-905000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| ssh     | functional-905000 ssh sudo                                               | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-905000                                                        | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-905000 ssh                                                    | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-905000 cache reload                                           | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| ssh     | functional-905000 ssh                                                    | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-905000 kubectl --                                             | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | --context functional-905000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-905000                                                     | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/29 16:48:56
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0729 16:48:56.636378    7894 out.go:291] Setting OutFile to fd 1 ...
I0729 16:48:56.636516    7894 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:48:56.636518    7894 out.go:304] Setting ErrFile to fd 2...
I0729 16:48:56.636519    7894 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:48:56.636622    7894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
I0729 16:48:56.637648    7894 out.go:298] Setting JSON to false
I0729 16:48:56.653472    7894 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4703,"bootTime":1722292233,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0729 16:48:56.653637    7894 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0729 16:48:56.658557    7894 out.go:177] * [functional-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0729 16:48:56.666512    7894 out.go:177]   - MINIKUBE_LOCATION=19346
I0729 16:48:56.666564    7894 notify.go:220] Checking for updates...
I0729 16:48:56.673502    7894 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
I0729 16:48:56.677583    7894 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0729 16:48:56.680597    7894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0729 16:48:56.689550    7894 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
I0729 16:48:56.698504    7894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0729 16:48:56.701910    7894 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:48:56.701962    7894 driver.go:392] Setting default libvirt URI to qemu:///system
I0729 16:48:56.706578    7894 out.go:177] * Using the qemu2 driver based on existing profile
I0729 16:48:56.713601    7894 start.go:297] selected driver: qemu2
I0729 16:48:56.713606    7894 start.go:901] validating driver "qemu2" against &{Name:functional-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:functional-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 16:48:56.713657    7894 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0729 16:48:56.716010    7894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0729 16:48:56.716034    7894 cni.go:84] Creating CNI manager for ""
I0729 16:48:56.716044    7894 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0729 16:48:56.716089    7894 start.go:340] cluster config:
{Name:functional-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-905000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 16:48:56.719928    7894 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0729 16:48:56.724676    7894 out.go:177] * Starting "functional-905000" primary control-plane node in "functional-905000" cluster
I0729 16:48:56.731537    7894 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 16:48:56.731560    7894 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 16:48:56.731573    7894 cache.go:56] Caching tarball of preloaded images
I0729 16:48:56.731652    7894 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 16:48:56.731657    7894 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 16:48:56.731722    7894 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/functional-905000/config.json ...
I0729 16:48:56.732251    7894 start.go:360] acquireMachinesLock for functional-905000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 16:48:56.732288    7894 start.go:364] duration metric: took 32.167µs to acquireMachinesLock for "functional-905000"
I0729 16:48:56.732296    7894 start.go:96] Skipping create...Using existing machine configuration
I0729 16:48:56.732301    7894 fix.go:54] fixHost starting: 
I0729 16:48:56.732429    7894 fix.go:112] recreateIfNeeded on functional-905000: state=Stopped err=<nil>
W0729 16:48:56.732435    7894 fix.go:138] unexpected machine state, will restart: <nil>
I0729 16:48:56.740584    7894 out.go:177] * Restarting existing qemu2 VM for "functional-905000" ...
I0729 16:48:56.744528    7894 qemu.go:418] Using hvf for hardware acceleration
I0729 16:48:56.744566    7894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:9a:b6:15:1b:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/disk.qcow2
I0729 16:48:56.746929    7894 main.go:141] libmachine: STDOUT: 
I0729 16:48:56.746947    7894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0729 16:48:56.746977    7894 fix.go:56] duration metric: took 14.6785ms for fixHost
I0729 16:48:56.746980    7894 start.go:83] releasing machines lock for "functional-905000", held for 14.688417ms
W0729 16:48:56.746989    7894 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 16:48:56.747029    7894 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 16:48:56.747034    7894 start.go:729] Will try again in 5 seconds ...
I0729 16:49:01.749249    7894 start.go:360] acquireMachinesLock for functional-905000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 16:49:01.749699    7894 start.go:364] duration metric: took 349.333µs to acquireMachinesLock for "functional-905000"
I0729 16:49:01.749827    7894 start.go:96] Skipping create...Using existing machine configuration
I0729 16:49:01.749843    7894 fix.go:54] fixHost starting: 
I0729 16:49:01.750560    7894 fix.go:112] recreateIfNeeded on functional-905000: state=Stopped err=<nil>
W0729 16:49:01.750583    7894 fix.go:138] unexpected machine state, will restart: <nil>
I0729 16:49:01.755893    7894 out.go:177] * Restarting existing qemu2 VM for "functional-905000" ...
I0729 16:49:01.763967    7894 qemu.go:418] Using hvf for hardware acceleration
I0729 16:49:01.764199    7894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:9a:b6:15:1b:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/disk.qcow2
I0729 16:49:01.774091    7894 main.go:141] libmachine: STDOUT: 
I0729 16:49:01.774141    7894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0729 16:49:01.774220    7894 fix.go:56] duration metric: took 24.378167ms for fixHost
I0729 16:49:01.774231    7894 start.go:83] releasing machines lock for "functional-905000", held for 24.516416ms
W0729 16:49:01.774425    7894 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-905000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 16:49:01.780943    7894 out.go:177] 
W0729 16:49:01.783945    7894 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 16:49:01.783970    7894 out.go:239] * 
W0729 16:49:01.785713    7894 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 16:49:01.793920    7894 out.go:177] 
* The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3845128938/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
|         | -p download-only-017000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
| delete  | -p download-only-017000                                                  | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
| start   | -o=json --download-only                                                  | download-only-107000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
|         | -p download-only-107000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
| delete  | -p download-only-107000                                                  | download-only-107000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
| start   | -o=json --download-only                                                  | download-only-330000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
|         | -p download-only-330000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| delete  | -p download-only-330000                                                  | download-only-330000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| delete  | -p download-only-017000                                                  | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| delete  | -p download-only-107000                                                  | download-only-107000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| delete  | -p download-only-330000                                                  | download-only-330000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| start   | --download-only -p                                                       | binary-mirror-394000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | binary-mirror-394000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51052                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-394000                                                  | binary-mirror-394000 | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| addons  | enable dashboard -p                                                      | addons-663000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | addons-663000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-663000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | addons-663000                                                            |                      |         |         |                     |                     |
| start   | -p addons-663000 --wait=true                                             | addons-663000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-663000                                                         | addons-663000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| start   | -p nospam-030000 -n=1 --memory=2250 --wait=false                         | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-030000 --log_dir                                                  | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-030000                                                         | nospam-030000        | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| start   | -p functional-905000                                                     | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-905000                                                     | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-905000 cache add                                              | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-905000 cache add                                              | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-905000 cache add                                              | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-905000 cache add                                              | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | minikube-local-cache-test:functional-905000                              |                      |         |         |                     |                     |
| cache   | functional-905000 cache delete                                           | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | minikube-local-cache-test:functional-905000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| ssh     | functional-905000 ssh sudo                                               | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-905000                                                        | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-905000 ssh                                                    | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-905000 cache reload                                           | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
| ssh     | functional-905000 ssh                                                    | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT | 29 Jul 24 16:48 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-905000 kubectl --                                             | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | --context functional-905000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-905000                                                     | functional-905000    | jenkins | v1.33.1 | 29 Jul 24 16:48 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/07/29 16:48:56
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0729 16:48:56.636378    7894 out.go:291] Setting OutFile to fd 1 ...
I0729 16:48:56.636516    7894 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:48:56.636518    7894 out.go:304] Setting ErrFile to fd 2...
I0729 16:48:56.636519    7894 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:48:56.636622    7894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
I0729 16:48:56.637648    7894 out.go:298] Setting JSON to false
I0729 16:48:56.653472    7894 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4703,"bootTime":1722292233,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0729 16:48:56.653637    7894 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0729 16:48:56.658557    7894 out.go:177] * [functional-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0729 16:48:56.666512    7894 out.go:177]   - MINIKUBE_LOCATION=19346
I0729 16:48:56.666564    7894 notify.go:220] Checking for updates...
I0729 16:48:56.673502    7894 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
I0729 16:48:56.677583    7894 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0729 16:48:56.680597    7894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0729 16:48:56.689550    7894 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
I0729 16:48:56.698504    7894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0729 16:48:56.701910    7894 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:48:56.701962    7894 driver.go:392] Setting default libvirt URI to qemu:///system
I0729 16:48:56.706578    7894 out.go:177] * Using the qemu2 driver based on existing profile
I0729 16:48:56.713601    7894 start.go:297] selected driver: qemu2
I0729 16:48:56.713606    7894 start.go:901] validating driver "qemu2" against &{Name:functional-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 16:48:56.713657    7894 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0729 16:48:56.716010    7894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0729 16:48:56.716034    7894 cni.go:84] Creating CNI manager for ""
I0729 16:48:56.716044    7894 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0729 16:48:56.716089    7894 start.go:340] cluster config:
{Name:functional-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 16:48:56.719928    7894 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0729 16:48:56.724676    7894 out.go:177] * Starting "functional-905000" primary control-plane node in "functional-905000" cluster
I0729 16:48:56.731537    7894 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 16:48:56.731560    7894 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 16:48:56.731573    7894 cache.go:56] Caching tarball of preloaded images
I0729 16:48:56.731652    7894 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 16:48:56.731657    7894 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 16:48:56.731722    7894 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/functional-905000/config.json ...
I0729 16:48:56.732251    7894 start.go:360] acquireMachinesLock for functional-905000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 16:48:56.732288    7894 start.go:364] duration metric: took 32.167µs to acquireMachinesLock for "functional-905000"
I0729 16:48:56.732296    7894 start.go:96] Skipping create...Using existing machine configuration
I0729 16:48:56.732301    7894 fix.go:54] fixHost starting: 
I0729 16:48:56.732429    7894 fix.go:112] recreateIfNeeded on functional-905000: state=Stopped err=<nil>
W0729 16:48:56.732435    7894 fix.go:138] unexpected machine state, will restart: <nil>
I0729 16:48:56.740584    7894 out.go:177] * Restarting existing qemu2 VM for "functional-905000" ...
I0729 16:48:56.744528    7894 qemu.go:418] Using hvf for hardware acceleration
I0729 16:48:56.744566    7894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:9a:b6:15:1b:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/disk.qcow2
I0729 16:48:56.746929    7894 main.go:141] libmachine: STDOUT: 
I0729 16:48:56.746947    7894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 16:48:56.746977    7894 fix.go:56] duration metric: took 14.6785ms for fixHost
I0729 16:48:56.746980    7894 start.go:83] releasing machines lock for "functional-905000", held for 14.688417ms
W0729 16:48:56.746989    7894 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 16:48:56.747029    7894 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 16:48:56.747034    7894 start.go:729] Will try again in 5 seconds ...
I0729 16:49:01.749249    7894 start.go:360] acquireMachinesLock for functional-905000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 16:49:01.749699    7894 start.go:364] duration metric: took 349.333µs to acquireMachinesLock for "functional-905000"
I0729 16:49:01.749827    7894 start.go:96] Skipping create...Using existing machine configuration
I0729 16:49:01.749843    7894 fix.go:54] fixHost starting: 
I0729 16:49:01.750560    7894 fix.go:112] recreateIfNeeded on functional-905000: state=Stopped err=<nil>
W0729 16:49:01.750583    7894 fix.go:138] unexpected machine state, will restart: <nil>
I0729 16:49:01.755893    7894 out.go:177] * Restarting existing qemu2 VM for "functional-905000" ...
I0729 16:49:01.763967    7894 qemu.go:418] Using hvf for hardware acceleration
I0729 16:49:01.764199    7894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:9a:b6:15:1b:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/functional-905000/disk.qcow2
I0729 16:49:01.774091    7894 main.go:141] libmachine: STDOUT: 
I0729 16:49:01.774141    7894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 16:49:01.774220    7894 fix.go:56] duration metric: took 24.378167ms for fixHost
I0729 16:49:01.774231    7894 start.go:83] releasing machines lock for "functional-905000", held for 24.516416ms
W0729 16:49:01.774425    7894 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-905000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 16:49:01.780943    7894 out.go:177] 
W0729 16:49:01.783945    7894 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 16:49:01.783970    7894 out.go:239] * 
W0729 16:49:01.785713    7894 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 16:49:01.793920    7894 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-905000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-905000 apply -f testdata/invalidsvc.yaml: exit status 1 (26.614834ms)

** stderr ** 
	error: context "functional-905000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-905000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-905000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-905000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-905000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-905000 --alsologtostderr -v=1] stderr:
I0729 16:49:37.734498    8092 out.go:291] Setting OutFile to fd 1 ...
I0729 16:49:37.735059    8092 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:37.735062    8092 out.go:304] Setting ErrFile to fd 2...
I0729 16:49:37.735065    8092 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:37.735206    8092 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
I0729 16:49:37.735441    8092 mustload.go:65] Loading cluster: functional-905000
I0729 16:49:37.735643    8092 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:49:37.739816    8092 out.go:177] * The control-plane node functional-905000 host is not running: state=Stopped
I0729 16:49:37.743761    8092 out.go:177]   To start a cluster, run: "minikube start -p functional-905000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (40.94425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.16s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 status: exit status 7 (70.586291ms)

-- stdout --
	functional-905000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-905000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (33.793792ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-905000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 status -o json: exit status 7 (30.022125ms)

-- stdout --
	{"Name":"functional-905000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-905000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (28.812708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.16s)

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-905000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-905000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.281583ms)

** stderr ** 
	error: context "functional-905000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-905000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-905000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-905000 describe po hello-node-connect: exit status 1 (26.381375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-905000

** /stderr **
functional_test.go:1600: "kubectl --context functional-905000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-905000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-905000 logs -l app=hello-node-connect: exit status 1 (27.257542ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-905000

** /stderr **
functional_test.go:1606: "kubectl --context functional-905000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-905000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-905000 describe svc hello-node-connect: exit status 1 (26.318208ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-905000

** /stderr **
functional_test.go:1612: "kubectl --context functional-905000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (30.05075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-905000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (29.679125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "echo hello": exit status 83 (47.869167ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-905000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-905000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-905000\"\n"*. args "out/minikube-darwin-arm64 -p functional-905000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "cat /etc/hostname": exit status 83 (40.833125ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-905000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-905000"- but got *"* The control-plane node functional-905000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-905000\"\n"*. args "out/minikube-darwin-arm64 -p functional-905000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (36.990292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (53.527ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-905000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh -n functional-905000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh -n functional-905000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.033417ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-905000 ssh -n functional-905000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-905000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-905000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cp functional-905000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1138147700/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 cp functional-905000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1138147700/001/cp-test.txt: exit status 83 (48.574916ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-905000 cp functional-905000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1138147700/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh -n functional-905000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh -n functional-905000 "sudo cat /home/docker/cp-test.txt": exit status 83 (46.151875ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-905000 ssh -n functional-905000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1138147700/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-905000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-905000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (46.561041ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-905000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh -n functional-905000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh -n functional-905000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (52.131875ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-905000 ssh -n functional-905000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-905000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-905000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.29s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7565/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/test/nested/copy/7565/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/test/nested/copy/7565/hosts": exit status 83 (40.353583ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/test/nested/copy/7565/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-905000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-905000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (29.11075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7565.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/ssl/certs/7565.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/ssl/certs/7565.pem": exit status 83 (42.483667ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/7565.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-905000 ssh \"sudo cat /etc/ssl/certs/7565.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7565.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-905000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-905000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7565.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /usr/share/ca-certificates/7565.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /usr/share/ca-certificates/7565.pem": exit status 83 (43.05175ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/7565.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-905000 ssh \"sudo cat /usr/share/ca-certificates/7565.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7565.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-905000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-905000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (43.512542ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-905000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-905000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-905000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/75652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/ssl/certs/75652.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/ssl/certs/75652.pem": exit status 83 (39.717541ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/75652.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-905000 ssh \"sudo cat /etc/ssl/certs/75652.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/75652.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-905000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-905000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/75652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /usr/share/ca-certificates/75652.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /usr/share/ca-certificates/75652.pem": exit status 83 (42.86625ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/75652.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-905000 ssh \"sudo cat /usr/share/ca-certificates/75652.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/75652.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-905000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-905000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (45.548541ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-905000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-905000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-905000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (29.603375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-905000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-905000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.967042ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-905000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-905000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-905000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-905000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-905000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-905000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-905000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000: exit status 7 (29.745625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "sudo systemctl is-active crio": exit status 83 (42.510042ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-905000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-905000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0729 16:49:02.445706    7943 out.go:291] Setting OutFile to fd 1 ...
I0729 16:49:02.445818    7943 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:02.445822    7943 out.go:304] Setting ErrFile to fd 2...
I0729 16:49:02.445824    7943 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:02.445956    7943 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
I0729 16:49:02.446288    7943 mustload.go:65] Loading cluster: functional-905000
I0729 16:49:02.446506    7943 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:49:02.454365    7943 out.go:177] * The control-plane node functional-905000 host is not running: state=Stopped
I0729 16:49:02.462202    7943 out.go:177]   To start a cluster, run: "minikube start -p functional-905000"

stdout: * The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7942: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-905000": client config: context "functional-905000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (101.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-905000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-905000 get svc nginx-svc: exit status 1 (69.171458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-905000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-905000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (101.62s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-905000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-905000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.512291ms)

** stderr ** 
	error: context "functional-905000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-905000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 service list: exit status 83 (43.646375ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-905000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-905000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-905000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 service list -o json: exit status 83 (40.845ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-905000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 service --namespace=default --https --url hello-node: exit status 83 (41.761042ms)
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-905000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 service hello-node --url --format={{.IP}}: exit status 83 (42.001959ms)
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-905000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-905000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-905000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 service hello-node --url: exit status 83 (39.829625ms)
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-905000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
functional_test.go:1565: failed to parse "* The control-plane node functional-905000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-905000\"": parse "* The control-plane node functional-905000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-905000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 version -o=json --components: exit status 83 (38.858708ms)
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-905000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-905000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-905000 image ls --format short --alsologtostderr:
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-905000 image ls --format short --alsologtostderr:
I0729 16:49:42.606801    8214 out.go:291] Setting OutFile to fd 1 ...
I0729 16:49:42.606949    8214 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:42.606952    8214 out.go:304] Setting ErrFile to fd 2...
I0729 16:49:42.606954    8214 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:42.607082    8214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
I0729 16:49:42.607519    8214 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:49:42.607590    8214 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-905000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-905000 image ls --format table --alsologtostderr:
I0729 16:49:42.675100    8218 out.go:291] Setting OutFile to fd 1 ...
I0729 16:49:42.675242    8218 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:42.675246    8218 out.go:304] Setting ErrFile to fd 2...
I0729 16:49:42.675248    8218 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:42.675388    8218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
I0729 16:49:42.675805    8218 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:49:42.675864    8218 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-905000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-905000 image ls --format json --alsologtostderr:
I0729 16:49:42.640940    8216 out.go:291] Setting OutFile to fd 1 ...
I0729 16:49:42.641091    8216 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:42.641094    8216 out.go:304] Setting ErrFile to fd 2...
I0729 16:49:42.641097    8216 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:42.641220    8216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
I0729 16:49:42.641603    8216 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:49:42.641662    8216 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-905000 image ls --format yaml --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-905000 image ls --format yaml --alsologtostderr:
I0729 16:49:42.710486    8220 out.go:291] Setting OutFile to fd 1 ...
I0729 16:49:42.710659    8220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:42.710663    8220 out.go:304] Setting ErrFile to fd 2...
I0729 16:49:42.710665    8220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:42.710791    8220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
I0729 16:49:42.711201    8220 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:49:42.711263    8220 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh pgrep buildkitd: exit status 83 (40.874042ms)
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image build -t localhost/my-image:functional-905000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-905000 image build -t localhost/my-image:functional-905000 testdata/build --alsologtostderr:
I0729 16:49:42.787507    8224 out.go:291] Setting OutFile to fd 1 ...
I0729 16:49:42.787918    8224 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:42.787922    8224 out.go:304] Setting ErrFile to fd 2...
I0729 16:49:42.787925    8224 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:49:42.788104    8224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
I0729 16:49:42.788502    8224 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:49:42.788978    8224 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:49:42.789213    8224 build_images.go:133] succeeded building to: 
I0729 16:49:42.789217    8224 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls
functional_test.go:442: expected "localhost/my-image:functional-905000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image load --daemon docker.io/kicbase/echo-server:functional-905000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-905000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.29s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image load --daemon docker.io/kicbase/echo-server:functional-905000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-905000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-905000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image load --daemon docker.io/kicbase/echo-server:functional-905000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-905000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image save docker.io/kicbase/echo-server:functional-905000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-905000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-905000 docker-env) && out/minikube-darwin-arm64 status -p functional-905000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-905000 docker-env) && out/minikube-darwin-arm64 status -p functional-905000": exit status 1 (42.292167ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 update-context --alsologtostderr -v=2: exit status 83 (39.960667ms)
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
-- /stdout --
** stderr ** 
	I0729 16:49:42.858782    8228 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:49:42.859236    8228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:49:42.859240    8228 out.go:304] Setting ErrFile to fd 2...
	I0729 16:49:42.859243    8228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:49:42.859402    8228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:49:42.859595    8228 mustload.go:65] Loading cluster: functional-905000
	I0729 16:49:42.859784    8228 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:49:42.863323    8228 out.go:177] * The control-plane node functional-905000 host is not running: state=Stopped
	I0729 16:49:42.867372    8228 out.go:177]   To start a cluster, run: "minikube start -p functional-905000"
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-905000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-905000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-905000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 update-context --alsologtostderr -v=2: exit status 83 (42.664666ms)
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
-- /stdout --
** stderr ** 
	I0729 16:49:42.940564    8232 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:49:42.940702    8232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:49:42.940705    8232 out.go:304] Setting ErrFile to fd 2...
	I0729 16:49:42.940707    8232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:49:42.940849    8232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:49:42.941100    8232 mustload.go:65] Loading cluster: functional-905000
	I0729 16:49:42.941288    8232 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:49:42.946339    8232 out.go:177] * The control-plane node functional-905000 host is not running: state=Stopped
	I0729 16:49:42.950355    8232 out.go:177]   To start a cluster, run: "minikube start -p functional-905000"
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-905000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-905000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-905000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 update-context --alsologtostderr -v=2: exit status 83 (40.396875ms)
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
-- /stdout --
** stderr ** 
	I0729 16:49:42.899261    8230 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:49:42.899395    8230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:49:42.899398    8230 out.go:304] Setting ErrFile to fd 2...
	I0729 16:49:42.899400    8230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:49:42.899517    8230 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:49:42.899738    8230 mustload.go:65] Loading cluster: functional-905000
	I0729 16:49:42.899932    8230 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:49:42.903439    8230 out.go:177] * The control-plane node functional-905000 host is not running: state=Stopped
	I0729 16:49:42.907136    8230 out.go:177]   To start a cluster, run: "minikube start -p functional-905000"
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-905000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-905000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-905000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.036335917s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 15 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (34.89s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (34.89s)

TestMultiControlPlane/serial/StartCluster (9.94s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-854000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-854000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.873831s)

-- stdout --
	* [ha-854000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-854000" primary control-plane node in "ha-854000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-854000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:51:44.500468    8287 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:51:44.500710    8287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:51:44.500713    8287 out.go:304] Setting ErrFile to fd 2...
	I0729 16:51:44.500715    8287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:51:44.500823    8287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:51:44.501879    8287 out.go:298] Setting JSON to false
	I0729 16:51:44.518080    8287 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4871,"bootTime":1722292233,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:51:44.518152    8287 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:51:44.525701    8287 out.go:177] * [ha-854000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:51:44.533912    8287 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:51:44.533959    8287 notify.go:220] Checking for updates...
	I0729 16:51:44.541853    8287 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:51:44.544845    8287 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:51:44.547821    8287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:51:44.550820    8287 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:51:44.553858    8287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:51:44.556920    8287 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:51:44.559777    8287 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:51:44.566813    8287 start.go:297] selected driver: qemu2
	I0729 16:51:44.566828    8287 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:51:44.566836    8287 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:51:44.569149    8287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:51:44.572757    8287 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:51:44.576092    8287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:51:44.576116    8287 cni.go:84] Creating CNI manager for ""
	I0729 16:51:44.576123    8287 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 16:51:44.576135    8287 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 16:51:44.576164    8287 start.go:340] cluster config:
	{Name:ha-854000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:51:44.580021    8287 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:51:44.588852    8287 out.go:177] * Starting "ha-854000" primary control-plane node in "ha-854000" cluster
	I0729 16:51:44.592761    8287 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:51:44.592778    8287 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:51:44.592796    8287 cache.go:56] Caching tarball of preloaded images
	I0729 16:51:44.592859    8287 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:51:44.592872    8287 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:51:44.593090    8287 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/ha-854000/config.json ...
	I0729 16:51:44.593102    8287 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/ha-854000/config.json: {Name:mk896e23386b6c379f9fea275ac4f0650520cf0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:51:44.593469    8287 start.go:360] acquireMachinesLock for ha-854000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:51:44.593505    8287 start.go:364] duration metric: took 29.791µs to acquireMachinesLock for "ha-854000"
	I0729 16:51:44.593519    8287 start.go:93] Provisioning new machine with config: &{Name:ha-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:51:44.593545    8287 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:51:44.601831    8287 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:51:44.620931    8287 start.go:159] libmachine.API.Create for "ha-854000" (driver="qemu2")
	I0729 16:51:44.620960    8287 client.go:168] LocalClient.Create starting
	I0729 16:51:44.621036    8287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 16:51:44.621066    8287 main.go:141] libmachine: Decoding PEM data...
	I0729 16:51:44.621075    8287 main.go:141] libmachine: Parsing certificate...
	I0729 16:51:44.621113    8287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 16:51:44.621137    8287 main.go:141] libmachine: Decoding PEM data...
	I0729 16:51:44.621150    8287 main.go:141] libmachine: Parsing certificate...
	I0729 16:51:44.621568    8287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:51:44.771706    8287 main.go:141] libmachine: Creating SSH key...
	I0729 16:51:44.892778    8287 main.go:141] libmachine: Creating Disk image...
	I0729 16:51:44.892784    8287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:51:44.893016    8287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2
	I0729 16:51:44.902358    8287 main.go:141] libmachine: STDOUT: 
	I0729 16:51:44.902375    8287 main.go:141] libmachine: STDERR: 
	I0729 16:51:44.902433    8287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2 +20000M
	I0729 16:51:44.910189    8287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:51:44.910203    8287 main.go:141] libmachine: STDERR: 
	I0729 16:51:44.910215    8287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2
	I0729 16:51:44.910219    8287 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:51:44.910228    8287 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:51:44.910255    8287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:61:ba:d6:eb:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2
	I0729 16:51:44.911838    8287 main.go:141] libmachine: STDOUT: 
	I0729 16:51:44.911852    8287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:51:44.911869    8287 client.go:171] duration metric: took 290.903541ms to LocalClient.Create
	I0729 16:51:46.914076    8287 start.go:128] duration metric: took 2.320511291s to createHost
	I0729 16:51:46.914137    8287 start.go:83] releasing machines lock for "ha-854000", held for 2.3206235s
	W0729 16:51:46.914199    8287 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:51:46.923371    8287 out.go:177] * Deleting "ha-854000" in qemu2 ...
	W0729 16:51:46.952398    8287 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:51:46.952420    8287 start.go:729] Will try again in 5 seconds ...
	I0729 16:51:51.954672    8287 start.go:360] acquireMachinesLock for ha-854000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:51:51.955281    8287 start.go:364] duration metric: took 399.708µs to acquireMachinesLock for "ha-854000"
	I0729 16:51:51.955415    8287 start.go:93] Provisioning new machine with config: &{Name:ha-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:51:51.955720    8287 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:51:51.966293    8287 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:51:52.016624    8287 start.go:159] libmachine.API.Create for "ha-854000" (driver="qemu2")
	I0729 16:51:52.016680    8287 client.go:168] LocalClient.Create starting
	I0729 16:51:52.016800    8287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 16:51:52.016857    8287 main.go:141] libmachine: Decoding PEM data...
	I0729 16:51:52.016873    8287 main.go:141] libmachine: Parsing certificate...
	I0729 16:51:52.016940    8287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 16:51:52.016984    8287 main.go:141] libmachine: Decoding PEM data...
	I0729 16:51:52.016997    8287 main.go:141] libmachine: Parsing certificate...
	I0729 16:51:52.017800    8287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:51:52.177492    8287 main.go:141] libmachine: Creating SSH key...
	I0729 16:51:52.279101    8287 main.go:141] libmachine: Creating Disk image...
	I0729 16:51:52.279106    8287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:51:52.279320    8287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2
	I0729 16:51:52.288532    8287 main.go:141] libmachine: STDOUT: 
	I0729 16:51:52.288547    8287 main.go:141] libmachine: STDERR: 
	I0729 16:51:52.288595    8287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2 +20000M
	I0729 16:51:52.296324    8287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:51:52.296337    8287 main.go:141] libmachine: STDERR: 
	I0729 16:51:52.296348    8287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2
	I0729 16:51:52.296351    8287 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:51:52.296362    8287 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:51:52.296396    8287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:de:20:c1:ad:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2
	I0729 16:51:52.298005    8287 main.go:141] libmachine: STDOUT: 
	I0729 16:51:52.298022    8287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:51:52.298035    8287 client.go:171] duration metric: took 281.348834ms to LocalClient.Create
	I0729 16:51:54.300207    8287 start.go:128] duration metric: took 2.344456s to createHost
	I0729 16:51:54.300282    8287 start.go:83] releasing machines lock for "ha-854000", held for 2.344973958s
	W0729 16:51:54.300711    8287 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-854000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-854000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:51:54.313383    8287 out.go:177] 
	W0729 16:51:54.316462    8287 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:51:54.316492    8287 out.go:239] * 
	* 
	W0729 16:51:54.319502    8287 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:51:54.330319    8287 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-854000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (67.219334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.94s)
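[Editor's note, not part of the generated report: every VM-creation failure above shares one root cause, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet helper was not listening at the `SocketVMnetPath` shown in the cluster config. A minimal pre-flight sketch like the following can distinguish a missing helper from a genuine minikube bug before re-running the suite; the socket path is taken from the log, while the `brew services` hint is an assumption about how socket_vmnet was installed on this agent.]

```shell
#!/bin/sh
# Pre-flight check: is the socket_vmnet unix socket present?
# Path comes from the log's SocketVMnetPath setting.
SOCK="/var/run/socket_vmnet"

if [ -S "$SOCK" ]; then
    # -S tests for a unix domain socket at the given path
    echo "socket_vmnet socket present at $SOCK"
else
    # On a Homebrew install, the helper is typically started with:
    #   sudo brew services start socket_vmnet
    echo "socket_vmnet socket missing at $SOCK"
fi
```

If the socket is missing, the qemu2 driver's `socket_vmnet_client` invocation (seen in the libmachine log lines above) will always be refused, so all VM-backed tests fail the same way.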

TestMultiControlPlane/serial/DeployApp (120.03s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (58.095334ms)

** stderr ** 
	error: cluster "ha-854000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- rollout status deployment/busybox: exit status 1 (57.295625ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.639209ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.689542ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.034ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.989ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.38025ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.989083ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.0385ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.531792ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.005583ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.412417ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.890833ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.802458ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.71125ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.562ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.854ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (30.26625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (120.03s)

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-854000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.324459ms)

** stderr ** 
	error: no server found for cluster "ha-854000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (29.838375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-854000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-854000 -v=7 --alsologtostderr: exit status 83 (38.956ms)

-- stdout --
	* The control-plane node ha-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-854000"

-- /stdout --
** stderr ** 
	I0729 16:53:54.552831    8392 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:54.553244    8392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:54.553247    8392 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:54.553250    8392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:54.553418    8392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:53:54.553646    8392 mustload.go:65] Loading cluster: ha-854000
	I0729 16:53:54.553840    8392 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:54.557655    8392 out.go:177] * The control-plane node ha-854000 host is not running: state=Stopped
	I0729 16:53:54.560426    8392 out.go:177]   To start a cluster, run: "minikube start -p ha-854000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-854000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (29.389167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-854000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-854000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.137459ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-854000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-854000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-854000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (30.067791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-854000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-854000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-854000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-854000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-854000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-854000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-854000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-854000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (29.07775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 status --output json -v=7 --alsologtostderr: exit status 7 (29.520917ms)

-- stdout --
	{"Name":"ha-854000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0729 16:53:54.755450    8404 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:54.755576    8404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:54.755579    8404 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:54.755581    8404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:54.755706    8404 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:53:54.755822    8404 out.go:298] Setting JSON to true
	I0729 16:53:54.755838    8404 mustload.go:65] Loading cluster: ha-854000
	I0729 16:53:54.755877    8404 notify.go:220] Checking for updates...
	I0729 16:53:54.756036    8404 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:54.756043    8404 status.go:255] checking status of ha-854000 ...
	I0729 16:53:54.756242    8404 status.go:330] ha-854000 host status = "Stopped" (err=<nil>)
	I0729 16:53:54.756246    8404 status.go:343] host is not running, skipping remaining checks
	I0729 16:53:54.756248    8404 status.go:257] ha-854000 status: &{Name:ha-854000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-854000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (29.242708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 node stop m02 -v=7 --alsologtostderr: exit status 85 (46.181584ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0729 16:53:54.815200    8408 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:54.815799    8408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:54.815804    8408 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:54.815807    8408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:54.815969    8408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:53:54.816204    8408 mustload.go:65] Loading cluster: ha-854000
	I0729 16:53:54.816401    8408 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:54.820505    8408 out.go:177] 
	W0729 16:53:54.823327    8408 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0729 16:53:54.823332    8408 out.go:239] * 
	* 
	W0729 16:53:54.825309    8408 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:53:54.829323    8408 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-854000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr: exit status 7 (30.391709ms)

-- stdout --
	ha-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:53:54.861235    8410 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:54.861376    8410 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:54.861379    8410 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:54.861382    8410 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:54.861514    8410 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:53:54.861643    8410 out.go:298] Setting JSON to false
	I0729 16:53:54.861651    8410 mustload.go:65] Loading cluster: ha-854000
	I0729 16:53:54.861905    8410 notify.go:220] Checking for updates...
	I0729 16:53:54.862255    8410 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:54.862270    8410 status.go:255] checking status of ha-854000 ...
	I0729 16:53:54.862757    8410 status.go:330] ha-854000 host status = "Stopped" (err=<nil>)
	I0729 16:53:54.862761    8410 status.go:343] host is not running, skipping remaining checks
	I0729 16:53:54.862764    8410 status.go:257] ha-854000 status: &{Name:ha-854000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr": ha-854000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr": ha-854000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr": ha-854000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr": ha-854000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (29.786542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-854000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-854000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-854000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-854000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (30.457792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

TestMultiControlPlane/serial/RestartSecondaryNode (54.26s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 node start m02 -v=7 --alsologtostderr: exit status 85 (47.0075ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0729 16:53:54.998532    8419 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:54.998915    8419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:54.998919    8419 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:54.998921    8419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:54.999084    8419 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:53:54.999324    8419 mustload.go:65] Loading cluster: ha-854000
	I0729 16:53:54.999492    8419 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:55.003391    8419 out.go:177] 
	W0729 16:53:55.007294    8419 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0729 16:53:55.007298    8419 out.go:239] * 
	* 
	W0729 16:53:55.009295    8419 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:53:55.013276    8419 out.go:177] 

** /stderr **
ha_test.go:422: I0729 16:53:54.998532    8419 out.go:291] Setting OutFile to fd 1 ...
I0729 16:53:54.998915    8419 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:53:54.998919    8419 out.go:304] Setting ErrFile to fd 2...
I0729 16:53:54.998921    8419 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:53:54.999084    8419 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
I0729 16:53:54.999324    8419 mustload.go:65] Loading cluster: ha-854000
I0729 16:53:54.999492    8419 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:53:55.003391    8419 out.go:177] 
W0729 16:53:55.007294    8419 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0729 16:53:55.007298    8419 out.go:239] * 
* 
W0729 16:53:55.009295    8419 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 16:53:55.013276    8419 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-854000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr: exit status 7 (29.264416ms)

-- stdout --
	ha-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:53:55.045873    8421 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:55.046005    8421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:55.046009    8421 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:55.046011    8421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:55.046132    8421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:53:55.046247    8421 out.go:298] Setting JSON to false
	I0729 16:53:55.046256    8421 mustload.go:65] Loading cluster: ha-854000
	I0729 16:53:55.046322    8421 notify.go:220] Checking for updates...
	I0729 16:53:55.046458    8421 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:55.046464    8421 status.go:255] checking status of ha-854000 ...
	I0729 16:53:55.046674    8421 status.go:330] ha-854000 host status = "Stopped" (err=<nil>)
	I0729 16:53:55.046677    8421 status.go:343] host is not running, skipping remaining checks
	I0729 16:53:55.046680    8421 status.go:257] ha-854000 status: &{Name:ha-854000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr: exit status 7 (73.846083ms)

-- stdout --
	ha-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:53:55.883529    8423 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:55.883729    8423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:55.883733    8423 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:55.883736    8423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:55.883935    8423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:53:55.884098    8423 out.go:298] Setting JSON to false
	I0729 16:53:55.884111    8423 mustload.go:65] Loading cluster: ha-854000
	I0729 16:53:55.884148    8423 notify.go:220] Checking for updates...
	I0729 16:53:55.884365    8423 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:55.884375    8423 status.go:255] checking status of ha-854000 ...
	I0729 16:53:55.884688    8423 status.go:330] ha-854000 host status = "Stopped" (err=<nil>)
	I0729 16:53:55.884693    8423 status.go:343] host is not running, skipping remaining checks
	I0729 16:53:55.884696    8423 status.go:257] ha-854000 status: &{Name:ha-854000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr: exit status 7 (73.606125ms)

-- stdout --
	ha-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:53:56.952965    8425 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:56.953137    8425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:56.953142    8425 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:56.953145    8425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:56.953312    8425 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:53:56.953508    8425 out.go:298] Setting JSON to false
	I0729 16:53:56.953519    8425 mustload.go:65] Loading cluster: ha-854000
	I0729 16:53:56.953557    8425 notify.go:220] Checking for updates...
	I0729 16:53:56.953768    8425 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:56.953777    8425 status.go:255] checking status of ha-854000 ...
	I0729 16:53:56.954056    8425 status.go:330] ha-854000 host status = "Stopped" (err=<nil>)
	I0729 16:53:56.954061    8425 status.go:343] host is not running, skipping remaining checks
	I0729 16:53:56.954064    8425 status.go:257] ha-854000 status: &{Name:ha-854000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr: exit status 7 (72.862167ms)

-- stdout --
	ha-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:53:59.399505    8427 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:53:59.399684    8427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:59.399688    8427 out.go:304] Setting ErrFile to fd 2...
	I0729 16:53:59.399691    8427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:53:59.399863    8427 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:53:59.400022    8427 out.go:298] Setting JSON to false
	I0729 16:53:59.400033    8427 mustload.go:65] Loading cluster: ha-854000
	I0729 16:53:59.400084    8427 notify.go:220] Checking for updates...
	I0729 16:53:59.400302    8427 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:53:59.400311    8427 status.go:255] checking status of ha-854000 ...
	I0729 16:53:59.400614    8427 status.go:330] ha-854000 host status = "Stopped" (err=<nil>)
	I0729 16:53:59.400619    8427 status.go:343] host is not running, skipping remaining checks
	I0729 16:53:59.400622    8427 status.go:257] ha-854000 status: &{Name:ha-854000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr: exit status 7 (73.147958ms)

-- stdout --
	ha-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:54:04.181545    8429 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:54:04.181737    8429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:04.181742    8429 out.go:304] Setting ErrFile to fd 2...
	I0729 16:54:04.181744    8429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:04.181897    8429 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:54:04.182056    8429 out.go:298] Setting JSON to false
	I0729 16:54:04.182069    8429 mustload.go:65] Loading cluster: ha-854000
	I0729 16:54:04.182105    8429 notify.go:220] Checking for updates...
	I0729 16:54:04.182302    8429 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:54:04.182311    8429 status.go:255] checking status of ha-854000 ...
	I0729 16:54:04.182648    8429 status.go:330] ha-854000 host status = "Stopped" (err=<nil>)
	I0729 16:54:04.182659    8429 status.go:343] host is not running, skipping remaining checks
	I0729 16:54:04.182662    8429 status.go:257] ha-854000 status: &{Name:ha-854000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr: exit status 7 (72.846125ms)

-- stdout --
	ha-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:54:07.415256    8431 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:54:07.415453    8431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:07.415457    8431 out.go:304] Setting ErrFile to fd 2...
	I0729 16:54:07.415460    8431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:07.415661    8431 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:54:07.415824    8431 out.go:298] Setting JSON to false
	I0729 16:54:07.415836    8431 mustload.go:65] Loading cluster: ha-854000
	I0729 16:54:07.415882    8431 notify.go:220] Checking for updates...
	I0729 16:54:07.416119    8431 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:54:07.416129    8431 status.go:255] checking status of ha-854000 ...
	I0729 16:54:07.416411    8431 status.go:330] ha-854000 host status = "Stopped" (err=<nil>)
	I0729 16:54:07.416416    8431 status.go:343] host is not running, skipping remaining checks
	I0729 16:54:07.416419    8431 status.go:257] ha-854000 status: &{Name:ha-854000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr: exit status 7 (73.810625ms)

-- stdout --
	ha-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:54:16.118002    8433 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:54:16.118233    8433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:16.118238    8433 out.go:304] Setting ErrFile to fd 2...
	I0729 16:54:16.118242    8433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:16.118442    8433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:54:16.118623    8433 out.go:298] Setting JSON to false
	I0729 16:54:16.118641    8433 mustload.go:65] Loading cluster: ha-854000
	I0729 16:54:16.118697    8433 notify.go:220] Checking for updates...
	I0729 16:54:16.118954    8433 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:54:16.118966    8433 status.go:255] checking status of ha-854000 ...
	I0729 16:54:16.119270    8433 status.go:330] ha-854000 host status = "Stopped" (err=<nil>)
	I0729 16:54:16.119276    8433 status.go:343] host is not running, skipping remaining checks
	I0729 16:54:16.119279    8433 status.go:257] ha-854000 status: &{Name:ha-854000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr: exit status 7 (72.436833ms)

-- stdout --
	ha-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:54:28.101543    8435 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:54:28.101739    8435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:28.101744    8435 out.go:304] Setting ErrFile to fd 2...
	I0729 16:54:28.101748    8435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:28.101958    8435 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:54:28.102136    8435 out.go:298] Setting JSON to false
	I0729 16:54:28.102156    8435 mustload.go:65] Loading cluster: ha-854000
	I0729 16:54:28.102205    8435 notify.go:220] Checking for updates...
	I0729 16:54:28.102426    8435 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:54:28.102435    8435 status.go:255] checking status of ha-854000 ...
	I0729 16:54:28.102774    8435 status.go:330] ha-854000 host status = "Stopped" (err=<nil>)
	I0729 16:54:28.102779    8435 status.go:343] host is not running, skipping remaining checks
	I0729 16:54:28.102782    8435 status.go:257] ha-854000 status: &{Name:ha-854000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr: exit status 7 (71.199708ms)

-- stdout --
	ha-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:54:49.195204    8442 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:54:49.195438    8442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:49.195442    8442 out.go:304] Setting ErrFile to fd 2...
	I0729 16:54:49.195446    8442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:49.195647    8442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:54:49.195827    8442 out.go:298] Setting JSON to false
	I0729 16:54:49.195840    8442 mustload.go:65] Loading cluster: ha-854000
	I0729 16:54:49.195893    8442 notify.go:220] Checking for updates...
	I0729 16:54:49.196153    8442 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:54:49.196166    8442 status.go:255] checking status of ha-854000 ...
	I0729 16:54:49.196460    8442 status.go:330] ha-854000 host status = "Stopped" (err=<nil>)
	I0729 16:54:49.196465    8442 status.go:343] host is not running, skipping remaining checks
	I0729 16:54:49.196468    8442 status.go:257] ha-854000 status: &{Name:ha-854000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (33.489333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (54.26s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-854000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-854000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-854000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-854000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-854000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-854000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-854000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-854000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (29.158ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.91s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-854000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-854000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-854000 -v=7 --alsologtostderr: (3.5491725s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-854000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-854000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.224699292s)

-- stdout --
	* [ha-854000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-854000" primary control-plane node in "ha-854000" cluster
	* Restarting existing qemu2 VM for "ha-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:54:52.953583    8471 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:54:52.953772    8471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:52.953777    8471 out.go:304] Setting ErrFile to fd 2...
	I0729 16:54:52.953780    8471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:52.953964    8471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:54:52.955243    8471 out.go:298] Setting JSON to false
	I0729 16:54:52.974939    8471 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5059,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:54:52.975006    8471 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:54:52.980285    8471 out.go:177] * [ha-854000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:54:52.987232    8471 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:54:52.987273    8471 notify.go:220] Checking for updates...
	I0729 16:54:52.995245    8471 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:54:52.999128    8471 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:54:53.002200    8471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:54:53.005251    8471 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:54:53.008141    8471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:54:53.011448    8471 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:54:53.011503    8471 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:54:53.016189    8471 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:54:53.023121    8471 start.go:297] selected driver: qemu2
	I0729 16:54:53.023126    8471 start.go:901] validating driver "qemu2" against &{Name:ha-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:54:53.023177    8471 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:54:53.025660    8471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:54:53.025709    8471 cni.go:84] Creating CNI manager for ""
	I0729 16:54:53.025714    8471 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 16:54:53.025781    8471 start.go:340] cluster config:
	{Name:ha-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:54:53.029545    8471 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:54:53.038209    8471 out.go:177] * Starting "ha-854000" primary control-plane node in "ha-854000" cluster
	I0729 16:54:53.042202    8471 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:54:53.042219    8471 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:54:53.042232    8471 cache.go:56] Caching tarball of preloaded images
	I0729 16:54:53.042298    8471 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:54:53.042307    8471 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:54:53.042371    8471 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/ha-854000/config.json ...
	I0729 16:54:53.042825    8471 start.go:360] acquireMachinesLock for ha-854000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:54:53.042862    8471 start.go:364] duration metric: took 30.416µs to acquireMachinesLock for "ha-854000"
	I0729 16:54:53.042871    8471 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:54:53.042880    8471 fix.go:54] fixHost starting: 
	I0729 16:54:53.043004    8471 fix.go:112] recreateIfNeeded on ha-854000: state=Stopped err=<nil>
	W0729 16:54:53.043015    8471 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:54:53.047181    8471 out.go:177] * Restarting existing qemu2 VM for "ha-854000" ...
	I0729 16:54:53.055047    8471 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:54:53.055083    8471 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:de:20:c1:ad:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2
	I0729 16:54:53.057287    8471 main.go:141] libmachine: STDOUT: 
	I0729 16:54:53.057308    8471 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:54:53.057339    8471 fix.go:56] duration metric: took 14.461ms for fixHost
	I0729 16:54:53.057345    8471 start.go:83] releasing machines lock for "ha-854000", held for 14.47825ms
	W0729 16:54:53.057352    8471 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:54:53.057406    8471 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:53.057411    8471 start.go:729] Will try again in 5 seconds ...
	I0729 16:54:58.059538    8471 start.go:360] acquireMachinesLock for ha-854000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:54:58.060033    8471 start.go:364] duration metric: took 365.084µs to acquireMachinesLock for "ha-854000"
	I0729 16:54:58.060195    8471 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:54:58.060214    8471 fix.go:54] fixHost starting: 
	I0729 16:54:58.060920    8471 fix.go:112] recreateIfNeeded on ha-854000: state=Stopped err=<nil>
	W0729 16:54:58.060947    8471 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:54:58.065356    8471 out.go:177] * Restarting existing qemu2 VM for "ha-854000" ...
	I0729 16:54:58.069358    8471 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:54:58.069601    8471 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:de:20:c1:ad:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2
	I0729 16:54:58.078163    8471 main.go:141] libmachine: STDOUT: 
	I0729 16:54:58.078224    8471 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:54:58.078290    8471 fix.go:56] duration metric: took 18.074917ms for fixHost
	I0729 16:54:58.078312    8471 start.go:83] releasing machines lock for "ha-854000", held for 18.217417ms
	W0729 16:54:58.078457    8471 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-854000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-854000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:54:58.085241    8471 out.go:177] 
	W0729 16:54:58.088319    8471 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:54:58.088373    8471 out.go:239] * 
	* 
	W0729 16:54:58.090950    8471 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:54:58.099222    8471 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-854000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-854000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (32.395459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.91s)

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.072833ms)

-- stdout --
	* The control-plane node ha-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-854000"

-- /stdout --
** stderr ** 
	I0729 16:54:58.243247    8483 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:54:58.243649    8483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:58.243653    8483 out.go:304] Setting ErrFile to fd 2...
	I0729 16:54:58.243661    8483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:58.243810    8483 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:54:58.244020    8483 mustload.go:65] Loading cluster: ha-854000
	I0729 16:54:58.244211    8483 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:54:58.248179    8483 out.go:177] * The control-plane node ha-854000 host is not running: state=Stopped
	I0729 16:54:58.251198    8483 out.go:177]   To start a cluster, run: "minikube start -p ha-854000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-854000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr: exit status 7 (29.279166ms)

-- stdout --
	ha-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:54:58.282770    8485 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:54:58.282981    8485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:58.282984    8485 out.go:304] Setting ErrFile to fd 2...
	I0729 16:54:58.282987    8485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:54:58.283121    8485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:54:58.283249    8485 out.go:298] Setting JSON to false
	I0729 16:54:58.283258    8485 mustload.go:65] Loading cluster: ha-854000
	I0729 16:54:58.283325    8485 notify.go:220] Checking for updates...
	I0729 16:54:58.283461    8485 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:54:58.283468    8485 status.go:255] checking status of ha-854000 ...
	I0729 16:54:58.283682    8485 status.go:330] ha-854000 host status = "Stopped" (err=<nil>)
	I0729 16:54:58.283686    8485 status.go:343] host is not running, skipping remaining checks
	I0729 16:54:58.283688    8485 status.go:257] ha-854000 status: &{Name:ha-854000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (30.125209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-854000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-854000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-854000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-854000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (29.584958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

TestMultiControlPlane/serial/StopCluster (1.98s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-854000 stop -v=7 --alsologtostderr: (1.882494417s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr: exit status 7 (63.699125ms)

-- stdout --
	ha-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:55:00.334369    8504 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:00.334575    8504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:00.334579    8504 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:00.334582    8504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:00.334740    8504 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:55:00.334884    8504 out.go:298] Setting JSON to false
	I0729 16:55:00.334895    8504 mustload.go:65] Loading cluster: ha-854000
	I0729 16:55:00.334935    8504 notify.go:220] Checking for updates...
	I0729 16:55:00.335174    8504 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:55:00.335182    8504 status.go:255] checking status of ha-854000 ...
	I0729 16:55:00.335478    8504 status.go:330] ha-854000 host status = "Stopped" (err=<nil>)
	I0729 16:55:00.335483    8504 status.go:343] host is not running, skipping remaining checks
	I0729 16:55:00.335486    8504 status.go:257] ha-854000 status: &{Name:ha-854000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr": ha-854000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr": ha-854000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-854000 status -v=7 --alsologtostderr": ha-854000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (32.534417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.98s)

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-854000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-854000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.18731475s)

-- stdout --
	* [ha-854000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-854000" primary control-plane node in "ha-854000" cluster
	* Restarting existing qemu2 VM for "ha-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:55:00.398279    8508 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:00.398417    8508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:00.398420    8508 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:00.398423    8508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:00.398544    8508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:55:00.399615    8508 out.go:298] Setting JSON to false
	I0729 16:55:00.416552    8508 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5067,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:55:00.416619    8508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:55:00.421554    8508 out.go:177] * [ha-854000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:55:00.429472    8508 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:55:00.429539    8508 notify.go:220] Checking for updates...
	I0729 16:55:00.436423    8508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:55:00.439409    8508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:55:00.442428    8508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:55:00.445425    8508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:55:00.448440    8508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:55:00.451737    8508 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:55:00.452002    8508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:55:00.455443    8508 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:55:00.462470    8508 start.go:297] selected driver: qemu2
	I0729 16:55:00.462478    8508 start.go:901] validating driver "qemu2" against &{Name:ha-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:00.462549    8508 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:55:00.464927    8508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:55:00.464975    8508 cni.go:84] Creating CNI manager for ""
	I0729 16:55:00.464979    8508 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 16:55:00.465027    8508 start.go:340] cluster config:
	{Name:ha-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-854000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:00.468580    8508 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:00.477365    8508 out.go:177] * Starting "ha-854000" primary control-plane node in "ha-854000" cluster
	I0729 16:55:00.481331    8508 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:55:00.481353    8508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:55:00.481365    8508 cache.go:56] Caching tarball of preloaded images
	I0729 16:55:00.481420    8508 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:55:00.481425    8508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:55:00.481469    8508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/ha-854000/config.json ...
	I0729 16:55:00.481890    8508 start.go:360] acquireMachinesLock for ha-854000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:00.481919    8508 start.go:364] duration metric: took 22.834µs to acquireMachinesLock for "ha-854000"
	I0729 16:55:00.481927    8508 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:55:00.481932    8508 fix.go:54] fixHost starting: 
	I0729 16:55:00.482044    8508 fix.go:112] recreateIfNeeded on ha-854000: state=Stopped err=<nil>
	W0729 16:55:00.482053    8508 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:55:00.489409    8508 out.go:177] * Restarting existing qemu2 VM for "ha-854000" ...
	I0729 16:55:00.493419    8508 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:00.493465    8508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:de:20:c1:ad:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2
	I0729 16:55:00.495518    8508 main.go:141] libmachine: STDOUT: 
	I0729 16:55:00.495539    8508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:00.495567    8508 fix.go:56] duration metric: took 13.635166ms for fixHost
	I0729 16:55:00.495571    8508 start.go:83] releasing machines lock for "ha-854000", held for 13.648ms
	W0729 16:55:00.495578    8508 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:00.495610    8508 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:00.495615    8508 start.go:729] Will try again in 5 seconds ...
	I0729 16:55:05.497853    8508 start.go:360] acquireMachinesLock for ha-854000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:05.498251    8508 start.go:364] duration metric: took 300.041µs to acquireMachinesLock for "ha-854000"
	I0729 16:55:05.498371    8508 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:55:05.498394    8508 fix.go:54] fixHost starting: 
	I0729 16:55:05.499033    8508 fix.go:112] recreateIfNeeded on ha-854000: state=Stopped err=<nil>
	W0729 16:55:05.499056    8508 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:55:05.507470    8508 out.go:177] * Restarting existing qemu2 VM for "ha-854000" ...
	I0729 16:55:05.511446    8508 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:05.511758    8508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:de:20:c1:ad:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/ha-854000/disk.qcow2
	I0729 16:55:05.520723    8508 main.go:141] libmachine: STDOUT: 
	I0729 16:55:05.520776    8508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:05.520847    8508 fix.go:56] duration metric: took 22.458042ms for fixHost
	I0729 16:55:05.520864    8508 start.go:83] releasing machines lock for "ha-854000", held for 22.58975ms
	W0729 16:55:05.521015    8508 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-854000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-854000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:05.528490    8508 out.go:177] 
	W0729 16:55:05.532561    8508 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:05.532584    8508 out.go:239] * 
	* 
	W0729 16:55:05.535081    8508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:55:05.543502    8508 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-854000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (69.693833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-854000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-854000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-854000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-854000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (29.635875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
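The assertion at ha_test.go:413 parses the output of `minikube profile list --output json` and compares the profile's `Status` field. A minimal sketch of that check in Python, using a trimmed-down, hypothetical sample of the JSON shape seen in the failure above (this is not minikube's own test code):

```python
import json

# Abbreviated sample mirroring the `profile list --output json` shape
# from the failure above (hypothetical values, heavily trimmed).
raw = '{"invalid": [], "valid": [{"Name": "ha-854000", "Status": "Stopped"}]}'

def profile_status(output: str, name: str) -> str:
    """Return the Status string for the named profile, or raise KeyError."""
    for profile in json.loads(output)["valid"]:
        if profile["Name"] == name:
            return profile["Status"]
    raise KeyError(name)

status = profile_status(raw, "ha-854000")
# The test expected "Degraded"; the stopped cluster reports "Stopped".
print(status)
```

Because the VM never started (see the earlier GUEST_PROVISION failures), the profile can only ever report `Stopped`, so every status assertion downstream fails the same way.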

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-854000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-854000 --control-plane -v=7 --alsologtostderr: exit status 83 (43.954791ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-854000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:55:05.735719    8523 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:05.735853    8523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:05.735860    8523 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:05.735862    8523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:05.735991    8523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:55:05.736233    8523 mustload.go:65] Loading cluster: ha-854000
	I0729 16:55:05.736425    8523 config.go:182] Loaded profile config "ha-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:55:05.740828    8523 out.go:177] * The control-plane node ha-854000 host is not running: state=Stopped
	I0729 16:55:05.747802    8523 out.go:177]   To start a cluster, run: "minikube start -p ha-854000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-854000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (30.363417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-854000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-854000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-854000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-854000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-854000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-854000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-854000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-854000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-854000 -n ha-854000: exit status 7 (29.045708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                    
TestImageBuild/serial/Setup (9.93s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-619000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-619000 --driver=qemu2 : exit status 80 (9.85723875s)

                                                
                                                
-- stdout --
	* [image-619000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-619000" primary control-plane node in "image-619000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-619000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-619000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-619000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-619000 -n image-619000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-619000 -n image-619000: exit status 7 (68.288209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-619000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.93s)
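The recurring `ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused` indicates the socket file exists (or is expected) but no socket_vmnet daemon is accepting connections on it. That failure mode can be reproduced in isolation with a Unix-domain socket whose file exists but has no listener (temporary stand-in path; nothing here touches /var/run):

```python
import os
import socket
import tempfile

# Create a socket file on disk, then close the socket so nothing is
# listening on it -- the same state as a dead socket_vmnet daemon.
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)   # creates the file at `path`
server.close()      # ...but leaves no listener behind

refused = False
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    client.connect(path)
except ConnectionRefusedError:
    refused = True   # ECONNREFUSED, matching the error in the log
finally:
    client.close()

print("connection refused:", refused)
```

This suggests the failures above are environmental (the socket_vmnet service on the build host was down), not a regression in the code under test, which would explain why every qemu2 start in this run fails identically.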

                                                
                                    
TestJSONOutput/start/Command (9.99s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-832000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-832000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.986841709s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bcebfcd9-e5fa-4a73-b74d-e960b79b9512","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-832000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3f7ab3a-dca2-4d2d-9e44-4ca2978b8206","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19346"}}
	{"specversion":"1.0","id":"daa499d2-f2ea-46dc-bdc7-a9f5683afe5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig"}}
	{"specversion":"1.0","id":"f6c5ab7d-6657-486b-9541-cdf2aa7b8ae4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4503eb45-199b-469b-a949-069a3d37251d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"61072c6b-5ba7-4865-b409-dcb8866d8299","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube"}}
	{"specversion":"1.0","id":"a89670d0-5e43-4ed5-94a7-8959bd7dbe48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f9b1317b-11e8-4ba8-9ec4-0a37891bf749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"43f5f240-4644-4af9-9551-6c006cc33823","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"7900e39a-b96f-4a21-851c-d7f210fea536","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-832000\" primary control-plane node in \"json-output-832000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c2ef1708-78bb-4af2-a282-00a053841d4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"3b83f5b4-4385-4a07-89f9-16c6782fe90b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-832000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"c95f698e-3758-40eb-b7e6-9420972211db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"fdb29222-b0b4-4c04-bee2-686c8251f291","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"86d1f7b4-5f0b-4c48-8336-6f5a21f87ff0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-832000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"72668fd0-8e7f-4907-ad2e-f04c6775ccf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"5a6663ea-8b2f-43d4-af93-7c4cac0ec660","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-832000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.99s)
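The secondary failure here ("converting to cloud events: invalid character 'O' looking for beginning of value") arises because the raw `OUTPUT:` / `ERROR:` lines from the qemu driver are interleaved with the CloudEvents JSON lines, and the test decodes stdout line by line as strict JSON. A small sketch of that parsing problem, using abbreviated sample lines modeled on the output above:

```python
import json

# Interleaved stdout as in the failure above: CloudEvents JSON lines
# mixed with raw driver output (sample lines, heavily abbreviated).
stdout = """\
{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"9"}}
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"name":"GUEST_PROVISION"}}
"""

events, rejected = [], []
for line in stdout.splitlines():
    # A strict decoder (like the Go test's) aborts on the first
    # non-JSON line; here we separate the two streams instead.
    try:
        events.append(json.loads(line))
    except json.JSONDecodeError:
        rejected.append(line)

print(len(events), "events,", len(rejected), "non-JSON lines")
```

The first rejected line begins with the character `O` (from `OUTPUT:`), which is exactly the character the Go JSON decoder reports in the failure message.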

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-832000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-832000 --output=json --user=testUser: exit status 83 (79.429541ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0569dbb6-12d8-43db-942e-c40150212b07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-832000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"bfc9b970-6bad-4e5a-82f4-a71ef227a362","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-832000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-832000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-832000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-832000 --output=json --user=testUser: exit status 83 (51.076375ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-832000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-832000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-832000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.27s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-687000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-687000 --driver=qemu2 : exit status 80 (9.976664125s)

                                                
                                                
-- stdout --
	* [first-687000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-687000" primary control-plane node in "first-687000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-687000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-687000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 16:55:39.544107 -0700 PDT m=+496.850285585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-689000 -n second-689000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-689000 -n second-689000: exit status 85 (83.072125ms)

                                                
                                                
-- stdout --
	* Profile "second-689000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-689000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-689000" host is not running, skipping log retrieval (state="* Profile \"second-689000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-689000\"")
helpers_test.go:175: Cleaning up "second-689000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-689000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 16:55:39.735091 -0700 PDT m=+497.041269418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-687000 -n first-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-687000 -n first-687000: exit status 7 (28.993875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-687000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-687000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-687000
--- FAIL: TestMinikubeProfile (10.27s)

TestMountStart/serial/StartWithMountFirst (9.97s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-055000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-055000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.901918375s)

-- stdout --
	* [mount-start-1-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-055000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-055000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-055000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-055000 -n mount-start-1-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-055000 -n mount-start-1-055000: exit status 7 (69.290542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-055000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.97s)
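Note: every `exit status 80` in this report traces to the same condition, nothing was accepting connections on `/var/run/socket_vmnet` when `socket_vmnet_client` tried to connect, so QEMU never started. A minimal, hypothetical pre-flight check for that condition (not part of the minikube test suite; Python 3, with the socket path taken from the log above) could look like:

```python
import socket


def vmnet_socket_reachable(path: str) -> bool:
    """Return True only if a unix-domain socket at `path` accepts a connection.

    A missing socket file and a present-but-unserved socket both count as
    unreachable; the latter is what surfaces as the repeated
    'Failed to connect to "/var/run/socket_vmnet": Connection refused' above.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()


if __name__ == "__main__":
    # On the failing CI host this check would have reported False
    # before any of the `minikube start` invocations above ran.
    print(vmnet_socket_reachable("/var/run/socket_vmnet"))
```

Running such a check (or an equivalent `launchctl`/process check for the socket_vmnet daemon) before the suite would fail fast instead of producing dozens of identical GUEST_PROVISION errors.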

TestMultiNode/serial/FreshStart2Nodes (9.83s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-877000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-877000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.764434666s)

-- stdout --
	* [multinode-877000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-877000" primary control-plane node in "multinode-877000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-877000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:55:50.020791    8670 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:55:50.020909    8670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:50.020913    8670 out.go:304] Setting ErrFile to fd 2...
	I0729 16:55:50.020915    8670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:55:50.021058    8670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:55:50.022138    8670 out.go:298] Setting JSON to false
	I0729 16:55:50.038130    8670 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5117,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:55:50.038193    8670 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:55:50.043975    8670 out.go:177] * [multinode-877000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:55:50.051930    8670 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:55:50.051979    8670 notify.go:220] Checking for updates...
	I0729 16:55:50.057941    8670 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:55:50.061902    8670 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:55:50.065013    8670 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:55:50.067892    8670 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:55:50.070980    8670 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:55:50.074099    8670 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:55:50.077916    8670 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:55:50.084957    8670 start.go:297] selected driver: qemu2
	I0729 16:55:50.084964    8670 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:55:50.084970    8670 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:55:50.087188    8670 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:55:50.089930    8670 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:55:50.093070    8670 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:55:50.093110    8670 cni.go:84] Creating CNI manager for ""
	I0729 16:55:50.093117    8670 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 16:55:50.093122    8670 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 16:55:50.093168    8670 start.go:340] cluster config:
	{Name:multinode-877000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:55:50.096860    8670 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:55:50.104911    8670 out.go:177] * Starting "multinode-877000" primary control-plane node in "multinode-877000" cluster
	I0729 16:55:50.108960    8670 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:55:50.108977    8670 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:55:50.108990    8670 cache.go:56] Caching tarball of preloaded images
	I0729 16:55:50.109060    8670 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:55:50.109067    8670 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:55:50.109279    8670 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/multinode-877000/config.json ...
	I0729 16:55:50.109291    8670 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/multinode-877000/config.json: {Name:mke251e8823f6689cbf49d84de0b68c574ed938e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:55:50.109521    8670 start.go:360] acquireMachinesLock for multinode-877000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:50.109557    8670 start.go:364] duration metric: took 29.292µs to acquireMachinesLock for "multinode-877000"
	I0729 16:55:50.109568    8670 start.go:93] Provisioning new machine with config: &{Name:multinode-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:multinode-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:55:50.109601    8670 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:55:50.116955    8670 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:55:50.134817    8670 start.go:159] libmachine.API.Create for "multinode-877000" (driver="qemu2")
	I0729 16:55:50.134848    8670 client.go:168] LocalClient.Create starting
	I0729 16:55:50.134910    8670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 16:55:50.134942    8670 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:50.134954    8670 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:50.134991    8670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 16:55:50.135018    8670 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:50.135031    8670 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:50.135373    8670 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:55:50.285612    8670 main.go:141] libmachine: Creating SSH key...
	I0729 16:55:50.349371    8670 main.go:141] libmachine: Creating Disk image...
	I0729 16:55:50.349376    8670 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:55:50.349595    8670 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2
	I0729 16:55:50.358796    8670 main.go:141] libmachine: STDOUT: 
	I0729 16:55:50.358816    8670 main.go:141] libmachine: STDERR: 
	I0729 16:55:50.358864    8670 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2 +20000M
	I0729 16:55:50.366709    8670 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:55:50.366731    8670 main.go:141] libmachine: STDERR: 
	I0729 16:55:50.366744    8670 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2
	I0729 16:55:50.366748    8670 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:55:50.366758    8670 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:50.366783    8670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b6:ba:a6:cc:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2
	I0729 16:55:50.368446    8670 main.go:141] libmachine: STDOUT: 
	I0729 16:55:50.368458    8670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:50.368475    8670 client.go:171] duration metric: took 233.623875ms to LocalClient.Create
	I0729 16:55:52.370662    8670 start.go:128] duration metric: took 2.261040292s to createHost
	I0729 16:55:52.370742    8670 start.go:83] releasing machines lock for "multinode-877000", held for 2.261173125s
	W0729 16:55:52.370853    8670 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:52.382492    8670 out.go:177] * Deleting "multinode-877000" in qemu2 ...
	W0729 16:55:52.412222    8670 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:52.412251    8670 start.go:729] Will try again in 5 seconds ...
	I0729 16:55:57.414408    8670 start.go:360] acquireMachinesLock for multinode-877000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:55:57.414845    8670 start.go:364] duration metric: took 347.667µs to acquireMachinesLock for "multinode-877000"
	I0729 16:55:57.415000    8670 start.go:93] Provisioning new machine with config: &{Name:multinode-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:multinode-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:55:57.415302    8670 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:55:57.429168    8670 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:55:57.478952    8670 start.go:159] libmachine.API.Create for "multinode-877000" (driver="qemu2")
	I0729 16:55:57.478998    8670 client.go:168] LocalClient.Create starting
	I0729 16:55:57.479119    8670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 16:55:57.479182    8670 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:57.479200    8670 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:57.479258    8670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 16:55:57.479304    8670 main.go:141] libmachine: Decoding PEM data...
	I0729 16:55:57.479316    8670 main.go:141] libmachine: Parsing certificate...
	I0729 16:55:57.479858    8670 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:55:57.642178    8670 main.go:141] libmachine: Creating SSH key...
	I0729 16:55:57.690483    8670 main.go:141] libmachine: Creating Disk image...
	I0729 16:55:57.690491    8670 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:55:57.690688    8670 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2
	I0729 16:55:57.699545    8670 main.go:141] libmachine: STDOUT: 
	I0729 16:55:57.699560    8670 main.go:141] libmachine: STDERR: 
	I0729 16:55:57.699616    8670 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2 +20000M
	I0729 16:55:57.707355    8670 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:55:57.707370    8670 main.go:141] libmachine: STDERR: 
	I0729 16:55:57.707380    8670 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2
	I0729 16:55:57.707385    8670 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:55:57.707395    8670 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:55:57.707418    8670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:66:e7:0c:b5:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2
	I0729 16:55:57.708977    8670 main.go:141] libmachine: STDOUT: 
	I0729 16:55:57.708991    8670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:55:57.709002    8670 client.go:171] duration metric: took 229.998625ms to LocalClient.Create
	I0729 16:55:59.711194    8670 start.go:128] duration metric: took 2.295839708s to createHost
	I0729 16:55:59.711270    8670 start.go:83] releasing machines lock for "multinode-877000", held for 2.296387417s
	W0729 16:55:59.711695    8670 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-877000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-877000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:55:59.726384    8670 out.go:177] 
	W0729 16:55:59.730498    8670 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:55:59.730524    8670 out.go:239] * 
	* 
	W0729 16:55:59.733282    8670 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:55:59.743385    8670 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-877000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (67.142917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.83s)
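Note: the alsologtostderr trace above also shows minikube's own recovery path: `createHost` fails, the half-created profile is deleted, start.go logs "Will try again in 5 seconds", and a second failure becomes the fatal GUEST_PROVISION exit. A hedged sketch of that single-retry shape (illustrative only; `create_host`, the delay, and the exception type are stand-ins, not minikube's real API):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def start_with_one_retry(create_host: Callable[[], T], delay_s: float = 5.0) -> T:
    """One attempt, a logged warning, a fixed delay, then one final attempt.

    Mirrors the sequence visible in the trace: the first StartHost failure is
    non-fatal ('! StartHost failed, but will try again'), the second is not.
    """
    try:
        return create_host()
    except RuntimeError as first_err:
        print(f"! StartHost failed, but will try again: {first_err}")
        time.sleep(delay_s)
        # A second failure propagates to the caller, which is where the
        # 'Exiting due to GUEST_PROVISION' error in the log comes from.
        return create_host()
```

Because the retry re-runs the same connection attempt against a daemon that is still down, it cannot succeed here, which is why both attempts in each failure block show the identical "Connection refused" error.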

TestMultiNode/serial/DeployApp2Nodes (116.39s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.503333ms)

** stderr ** 
	error: cluster "multinode-877000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- rollout status deployment/busybox: exit status 1 (56.244708ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.975542ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.988416ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.97225ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.23175ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.166083ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.614292ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.116916ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.589833ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.900792ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.810166ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.707291ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.486083ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.576709ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.704625ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.502833ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (29.474083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (116.39s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-877000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.837167ms)

** stderr ** 
	error: no server found for cluster "multinode-877000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (30.012834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-877000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-877000 -v 3 --alsologtostderr: exit status 83 (42.183875ms)

-- stdout --
	* The control-plane node multinode-877000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-877000"

-- /stdout --
** stderr ** 
	I0729 16:57:56.326837    8770 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:57:56.326986    8770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:56.326992    8770 out.go:304] Setting ErrFile to fd 2...
	I0729 16:57:56.326995    8770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:56.327137    8770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:57:56.327380    8770 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:57:56.327592    8770 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:57:56.332160    8770 out.go:177] * The control-plane node multinode-877000 host is not running: state=Stopped
	I0729 16:57:56.336182    8770 out.go:177]   To start a cluster, run: "minikube start -p multinode-877000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-877000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (29.848375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-877000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-877000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.016084ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-877000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-877000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-877000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (29.648667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-877000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-877000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-877000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-877000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (29.53375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status --output json --alsologtostderr: exit status 7 (30.365375ms)

-- stdout --
	{"Name":"multinode-877000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0729 16:57:56.531506    8782 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:57:56.531643    8782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:56.531646    8782 out.go:304] Setting ErrFile to fd 2...
	I0729 16:57:56.531649    8782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:56.531762    8782 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:57:56.531882    8782 out.go:298] Setting JSON to true
	I0729 16:57:56.531896    8782 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:57:56.531951    8782 notify.go:220] Checking for updates...
	I0729 16:57:56.532101    8782 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:57:56.532107    8782 status.go:255] checking status of multinode-877000 ...
	I0729 16:57:56.532318    8782 status.go:330] multinode-877000 host status = "Stopped" (err=<nil>)
	I0729 16:57:56.532323    8782 status.go:343] host is not running, skipping remaining checks
	I0729 16:57:56.532326    8782 status.go:257] multinode-877000 status: &{Name:multinode-877000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-877000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (28.109625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 node stop m03: exit status 85 (46.645083ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-877000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status: exit status 7 (29.852ms)

-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status --alsologtostderr: exit status 7 (29.587791ms)

-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:57:56.666580    8790 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:57:56.666710    8790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:56.666713    8790 out.go:304] Setting ErrFile to fd 2...
	I0729 16:57:56.666716    8790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:56.666867    8790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:57:56.666990    8790 out.go:298] Setting JSON to false
	I0729 16:57:56.666999    8790 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:57:56.667063    8790 notify.go:220] Checking for updates...
	I0729 16:57:56.667196    8790 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:57:56.667203    8790 status.go:255] checking status of multinode-877000 ...
	I0729 16:57:56.667405    8790 status.go:330] multinode-877000 host status = "Stopped" (err=<nil>)
	I0729 16:57:56.667408    8790 status.go:343] host is not running, skipping remaining checks
	I0729 16:57:56.667411    8790 status.go:257] multinode-877000 status: &{Name:multinode-877000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-877000 status --alsologtostderr": multinode-877000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (29.295667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (50.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 node start m03 -v=7 --alsologtostderr: exit status 85 (48.735708ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:57:56.725679    8794 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:57:56.726126    8794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:56.726130    8794 out.go:304] Setting ErrFile to fd 2...
	I0729 16:57:56.726133    8794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:56.726292    8794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:57:56.726497    8794 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:57:56.726665    8794 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:57:56.731786    8794 out.go:177] 
	W0729 16:57:56.735656    8794 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0729 16:57:56.735661    8794 out.go:239] * 
	* 
	W0729 16:57:56.737553    8794 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:57:56.741733    8794 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0729 16:57:56.725679    8794 out.go:291] Setting OutFile to fd 1 ...
I0729 16:57:56.726126    8794 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:57:56.726130    8794 out.go:304] Setting ErrFile to fd 2...
I0729 16:57:56.726133    8794 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:57:56.726292    8794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
I0729 16:57:56.726497    8794 mustload.go:65] Loading cluster: multinode-877000
I0729 16:57:56.726665    8794 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:57:56.731786    8794 out.go:177] 
W0729 16:57:56.735656    8794 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0729 16:57:56.735661    8794 out.go:239] * 
* 
W0729 16:57:56.737553    8794 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 16:57:56.741733    8794 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-877000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr: exit status 7 (30.123041ms)

                                                
                                                
-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:57:56.775161    8796 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:57:56.775291    8796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:56.775295    8796 out.go:304] Setting ErrFile to fd 2...
	I0729 16:57:56.775297    8796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:56.775435    8796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:57:56.775561    8796 out.go:298] Setting JSON to false
	I0729 16:57:56.775568    8796 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:57:56.775621    8796 notify.go:220] Checking for updates...
	I0729 16:57:56.775780    8796 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:57:56.775786    8796 status.go:255] checking status of multinode-877000 ...
	I0729 16:57:56.776015    8796 status.go:330] multinode-877000 host status = "Stopped" (err=<nil>)
	I0729 16:57:56.776019    8796 status.go:343] host is not running, skipping remaining checks
	I0729 16:57:56.776021    8796 status.go:257] multinode-877000 status: &{Name:multinode-877000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr: exit status 7 (74.6515ms)

                                                
                                                
-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:57:58.206488    8798 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:57:58.206709    8798 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:58.206713    8798 out.go:304] Setting ErrFile to fd 2...
	I0729 16:57:58.206716    8798 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:58.206947    8798 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:57:58.207118    8798 out.go:298] Setting JSON to false
	I0729 16:57:58.207130    8798 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:57:58.207165    8798 notify.go:220] Checking for updates...
	I0729 16:57:58.207385    8798 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:57:58.207394    8798 status.go:255] checking status of multinode-877000 ...
	I0729 16:57:58.207676    8798 status.go:330] multinode-877000 host status = "Stopped" (err=<nil>)
	I0729 16:57:58.207681    8798 status.go:343] host is not running, skipping remaining checks
	I0729 16:57:58.207684    8798 status.go:257] multinode-877000 status: &{Name:multinode-877000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr: exit status 7 (75.370458ms)

                                                
                                                
-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:57:59.861238    8800 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:57:59.861463    8800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:59.861468    8800 out.go:304] Setting ErrFile to fd 2...
	I0729 16:57:59.861471    8800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:57:59.861666    8800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:57:59.861848    8800 out.go:298] Setting JSON to false
	I0729 16:57:59.861861    8800 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:57:59.861902    8800 notify.go:220] Checking for updates...
	I0729 16:57:59.862120    8800 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:57:59.862129    8800 status.go:255] checking status of multinode-877000 ...
	I0729 16:57:59.862447    8800 status.go:330] multinode-877000 host status = "Stopped" (err=<nil>)
	I0729 16:57:59.862452    8800 status.go:343] host is not running, skipping remaining checks
	I0729 16:57:59.862455    8800 status.go:257] multinode-877000 status: &{Name:multinode-877000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr: exit status 7 (72.644125ms)

                                                
                                                
-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:58:01.456951    8802 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:58:01.457135    8802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:01.457140    8802 out.go:304] Setting ErrFile to fd 2...
	I0729 16:58:01.457143    8802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:01.457324    8802 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:58:01.457481    8802 out.go:298] Setting JSON to false
	I0729 16:58:01.457493    8802 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:58:01.457533    8802 notify.go:220] Checking for updates...
	I0729 16:58:01.457759    8802 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:58:01.457768    8802 status.go:255] checking status of multinode-877000 ...
	I0729 16:58:01.458056    8802 status.go:330] multinode-877000 host status = "Stopped" (err=<nil>)
	I0729 16:58:01.458062    8802 status.go:343] host is not running, skipping remaining checks
	I0729 16:58:01.458065    8802 status.go:257] multinode-877000 status: &{Name:multinode-877000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr: exit status 7 (74.280584ms)

                                                
                                                
-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:58:03.825238    8806 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:58:03.825431    8806 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:03.825436    8806 out.go:304] Setting ErrFile to fd 2...
	I0729 16:58:03.825439    8806 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:03.825657    8806 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:58:03.825848    8806 out.go:298] Setting JSON to false
	I0729 16:58:03.825860    8806 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:58:03.825909    8806 notify.go:220] Checking for updates...
	I0729 16:58:03.826160    8806 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:58:03.826169    8806 status.go:255] checking status of multinode-877000 ...
	I0729 16:58:03.826452    8806 status.go:330] multinode-877000 host status = "Stopped" (err=<nil>)
	I0729 16:58:03.826457    8806 status.go:343] host is not running, skipping remaining checks
	I0729 16:58:03.826460    8806 status.go:257] multinode-877000 status: &{Name:multinode-877000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr: exit status 7 (74.8455ms)

                                                
                                                
-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:58:07.521260    8808 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:58:07.521479    8808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:07.521484    8808 out.go:304] Setting ErrFile to fd 2...
	I0729 16:58:07.521487    8808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:07.521700    8808 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:58:07.521868    8808 out.go:298] Setting JSON to false
	I0729 16:58:07.521882    8808 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:58:07.521949    8808 notify.go:220] Checking for updates...
	I0729 16:58:07.522161    8808 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:58:07.522170    8808 status.go:255] checking status of multinode-877000 ...
	I0729 16:58:07.522488    8808 status.go:330] multinode-877000 host status = "Stopped" (err=<nil>)
	I0729 16:58:07.522493    8808 status.go:343] host is not running, skipping remaining checks
	I0729 16:58:07.522496    8808 status.go:257] multinode-877000 status: &{Name:multinode-877000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr: exit status 7 (73.031542ms)

                                                
                                                
-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:58:16.494874    8813 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:58:16.495061    8813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:16.495065    8813 out.go:304] Setting ErrFile to fd 2...
	I0729 16:58:16.495068    8813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:16.495250    8813 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:58:16.495409    8813 out.go:298] Setting JSON to false
	I0729 16:58:16.495435    8813 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:58:16.495471    8813 notify.go:220] Checking for updates...
	I0729 16:58:16.495694    8813 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:58:16.495703    8813 status.go:255] checking status of multinode-877000 ...
	I0729 16:58:16.495974    8813 status.go:330] multinode-877000 host status = "Stopped" (err=<nil>)
	I0729 16:58:16.495979    8813 status.go:343] host is not running, skipping remaining checks
	I0729 16:58:16.495982    8813 status.go:257] multinode-877000 status: &{Name:multinode-877000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr: exit status 7 (73.088166ms)

                                                
                                                
-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:58:22.505028    8815 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:58:22.505218    8815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:22.505223    8815 out.go:304] Setting ErrFile to fd 2...
	I0729 16:58:22.505226    8815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:22.505383    8815 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:58:22.505552    8815 out.go:298] Setting JSON to false
	I0729 16:58:22.505564    8815 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:58:22.505607    8815 notify.go:220] Checking for updates...
	I0729 16:58:22.505814    8815 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:58:22.505823    8815 status.go:255] checking status of multinode-877000 ...
	I0729 16:58:22.506125    8815 status.go:330] multinode-877000 host status = "Stopped" (err=<nil>)
	I0729 16:58:22.506130    8815 status.go:343] host is not running, skipping remaining checks
	I0729 16:58:22.506133    8815 status.go:257] multinode-877000 status: &{Name:multinode-877000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr: exit status 7 (72.29975ms)

                                                
                                                
-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:58:47.125867    8822 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:58:47.126070    8822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:47.126075    8822 out.go:304] Setting ErrFile to fd 2...
	I0729 16:58:47.126078    8822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:47.126266    8822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:58:47.126431    8822 out.go:298] Setting JSON to false
	I0729 16:58:47.126443    8822 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:58:47.126478    8822 notify.go:220] Checking for updates...
	I0729 16:58:47.126690    8822 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:58:47.126699    8822 status.go:255] checking status of multinode-877000 ...
	I0729 16:58:47.126981    8822 status.go:330] multinode-877000 host status = "Stopped" (err=<nil>)
	I0729 16:58:47.126986    8822 status.go:343] host is not running, skipping remaining checks
	I0729 16:58:47.126989    8822 status.go:257] multinode-877000 status: &{Name:multinode-877000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-877000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (32.27175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (50.46s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-877000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-877000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-877000: (3.025003833s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-877000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-877000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.229261625s)

                                                
                                                
-- stdout --
	* [multinode-877000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-877000" primary control-plane node in "multinode-877000" cluster
	* Restarting existing qemu2 VM for "multinode-877000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-877000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:58:50.276898    8846 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:58:50.277106    8846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:50.277111    8846 out.go:304] Setting ErrFile to fd 2...
	I0729 16:58:50.277114    8846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:50.277296    8846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:58:50.278562    8846 out.go:298] Setting JSON to false
	I0729 16:58:50.298328    8846 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5297,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:58:50.298397    8846 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:58:50.303545    8846 out.go:177] * [multinode-877000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:58:50.310562    8846 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:58:50.310598    8846 notify.go:220] Checking for updates...
	I0729 16:58:50.318560    8846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:58:50.321515    8846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:58:50.324593    8846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:58:50.327510    8846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:58:50.330434    8846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:58:50.333798    8846 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:58:50.333852    8846 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:58:50.338463    8846 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:58:50.345512    8846 start.go:297] selected driver: qemu2
	I0729 16:58:50.345518    8846 start.go:901] validating driver "qemu2" against &{Name:multinode-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:58:50.345574    8846 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:58:50.348155    8846 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:58:50.348191    8846 cni.go:84] Creating CNI manager for ""
	I0729 16:58:50.348195    8846 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 16:58:50.348245    8846 start.go:340] cluster config:
	{Name:multinode-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:58:50.352060    8846 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:58:50.359461    8846 out.go:177] * Starting "multinode-877000" primary control-plane node in "multinode-877000" cluster
	I0729 16:58:50.367027    8846 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:58:50.367050    8846 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:58:50.367059    8846 cache.go:56] Caching tarball of preloaded images
	I0729 16:58:50.367132    8846 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:58:50.367138    8846 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:58:50.367188    8846 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/multinode-877000/config.json ...
	I0729 16:58:50.367690    8846 start.go:360] acquireMachinesLock for multinode-877000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:58:50.367730    8846 start.go:364] duration metric: took 32.417µs to acquireMachinesLock for "multinode-877000"
	I0729 16:58:50.367739    8846 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:58:50.367757    8846 fix.go:54] fixHost starting: 
	I0729 16:58:50.367891    8846 fix.go:112] recreateIfNeeded on multinode-877000: state=Stopped err=<nil>
	W0729 16:58:50.367903    8846 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:58:50.372492    8846 out.go:177] * Restarting existing qemu2 VM for "multinode-877000" ...
	I0729 16:58:50.380369    8846 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:58:50.380416    8846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:66:e7:0c:b5:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2
	I0729 16:58:50.382753    8846 main.go:141] libmachine: STDOUT: 
	I0729 16:58:50.382776    8846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:58:50.382809    8846 fix.go:56] duration metric: took 15.065833ms for fixHost
	I0729 16:58:50.382814    8846 start.go:83] releasing machines lock for "multinode-877000", held for 15.079292ms
	W0729 16:58:50.382822    8846 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:58:50.382851    8846 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:58:50.382856    8846 start.go:729] Will try again in 5 seconds ...
	I0729 16:58:55.385186    8846 start.go:360] acquireMachinesLock for multinode-877000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:58:55.385644    8846 start.go:364] duration metric: took 320.125µs to acquireMachinesLock for "multinode-877000"
	I0729 16:58:55.385788    8846 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:58:55.385810    8846 fix.go:54] fixHost starting: 
	I0729 16:58:55.386594    8846 fix.go:112] recreateIfNeeded on multinode-877000: state=Stopped err=<nil>
	W0729 16:58:55.386623    8846 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:58:55.394934    8846 out.go:177] * Restarting existing qemu2 VM for "multinode-877000" ...
	I0729 16:58:55.399095    8846 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:58:55.399347    8846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:66:e7:0c:b5:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2
	I0729 16:58:55.408640    8846 main.go:141] libmachine: STDOUT: 
	I0729 16:58:55.408694    8846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:58:55.408783    8846 fix.go:56] duration metric: took 22.976834ms for fixHost
	I0729 16:58:55.408799    8846 start.go:83] releasing machines lock for "multinode-877000", held for 23.13ms
	W0729 16:58:55.408948    8846 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-877000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-877000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:58:55.416127    8846 out.go:177] 
	W0729 16:58:55.420119    8846 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:58:55.420153    8846 out.go:239] * 
	* 
	W0729 16:58:55.422638    8846 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:58:55.431076    8846 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-877000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-877000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (33.275458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.39s)
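Every failed restart in this report carries the same root signature. As a minimal, illustrative sketch (not part of the recorded test output), the signature can be counted in a captured log; the inlined `sample` stands in for a real `logs.txt`, which is an assumption for illustration:

```shell
# Count lines carrying the socket_vmnet "Connection refused" signature.
# "sample" inlines one line copied from the log above; in practice this
# would be a file captured via `minikube logs --file=logs.txt`.
sample='! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1'
count=$(printf '%s\n' "$sample" | grep -c 'socket_vmnet.*Connection refused')
echo "socket_vmnet refusals: $count"
```

A count greater than zero across these tests points at the socket_vmnet daemon not listening on `/var/run/socket_vmnet`, rather than at the individual tests.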

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 node delete m03: exit status 83 (40.950459ms)

-- stdout --
	* The control-plane node multinode-877000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-877000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-877000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status --alsologtostderr: exit status 7 (30.038083ms)

-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:58:55.616473    8863 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:58:55.616628    8863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:55.616631    8863 out.go:304] Setting ErrFile to fd 2...
	I0729 16:58:55.616633    8863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:55.616770    8863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:58:55.616891    8863 out.go:298] Setting JSON to false
	I0729 16:58:55.616900    8863 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:58:55.616957    8863 notify.go:220] Checking for updates...
	I0729 16:58:55.617109    8863 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:58:55.617116    8863 status.go:255] checking status of multinode-877000 ...
	I0729 16:58:55.617314    8863 status.go:330] multinode-877000 host status = "Stopped" (err=<nil>)
	I0729 16:58:55.617319    8863 status.go:343] host is not running, skipping remaining checks
	I0729 16:58:55.617321    8863 status.go:257] multinode-877000 status: &{Name:multinode-877000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-877000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (30.049458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.51s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-877000 stop: (3.381737458s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status: exit status 7 (64.500416ms)

-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-877000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-877000 status --alsologtostderr: exit status 7 (32.74625ms)

-- stdout --
	multinode-877000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:58:59.126186    8887 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:58:59.126333    8887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:59.126337    8887 out.go:304] Setting ErrFile to fd 2...
	I0729 16:58:59.126339    8887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:59.126478    8887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:58:59.126603    8887 out.go:298] Setting JSON to false
	I0729 16:58:59.126612    8887 mustload.go:65] Loading cluster: multinode-877000
	I0729 16:58:59.126681    8887 notify.go:220] Checking for updates...
	I0729 16:58:59.126833    8887 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:58:59.126840    8887 status.go:255] checking status of multinode-877000 ...
	I0729 16:58:59.127050    8887 status.go:330] multinode-877000 host status = "Stopped" (err=<nil>)
	I0729 16:58:59.127054    8887 status.go:343] host is not running, skipping remaining checks
	I0729 16:58:59.127057    8887 status.go:257] multinode-877000 status: &{Name:multinode-877000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-877000 status --alsologtostderr": multinode-877000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-877000 status --alsologtostderr": multinode-877000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (29.80325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.51s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-877000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-877000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.179456041s)

-- stdout --
	* [multinode-877000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-877000" primary control-plane node in "multinode-877000" cluster
	* Restarting existing qemu2 VM for "multinode-877000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-877000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:58:59.185794    8891 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:58:59.185945    8891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:59.185948    8891 out.go:304] Setting ErrFile to fd 2...
	I0729 16:58:59.185951    8891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:58:59.186098    8891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:58:59.187112    8891 out.go:298] Setting JSON to false
	I0729 16:58:59.203422    8891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5306,"bootTime":1722292233,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:58:59.203502    8891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:58:59.209017    8891 out.go:177] * [multinode-877000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:58:59.214891    8891 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:58:59.214963    8891 notify.go:220] Checking for updates...
	I0729 16:58:59.221945    8891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:58:59.224984    8891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:58:59.227974    8891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:58:59.230993    8891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:58:59.233974    8891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:58:59.237271    8891 config.go:182] Loaded profile config "multinode-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:58:59.237560    8891 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:58:59.241969    8891 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:58:59.248956    8891 start.go:297] selected driver: qemu2
	I0729 16:58:59.248964    8891 start.go:901] validating driver "qemu2" against &{Name:multinode-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:58:59.249058    8891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:58:59.251348    8891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:58:59.251410    8891 cni.go:84] Creating CNI manager for ""
	I0729 16:58:59.251416    8891 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 16:58:59.251462    8891 start.go:340] cluster config:
	{Name:multinode-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:58:59.255259    8891 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:58:59.262894    8891 out.go:177] * Starting "multinode-877000" primary control-plane node in "multinode-877000" cluster
	I0729 16:58:59.267000    8891 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:58:59.267019    8891 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:58:59.267028    8891 cache.go:56] Caching tarball of preloaded images
	I0729 16:58:59.267085    8891 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:58:59.267091    8891 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:58:59.267151    8891 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/multinode-877000/config.json ...
	I0729 16:58:59.267619    8891 start.go:360] acquireMachinesLock for multinode-877000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:58:59.267648    8891 start.go:364] duration metric: took 23.375µs to acquireMachinesLock for "multinode-877000"
	I0729 16:58:59.267656    8891 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:58:59.267663    8891 fix.go:54] fixHost starting: 
	I0729 16:58:59.267782    8891 fix.go:112] recreateIfNeeded on multinode-877000: state=Stopped err=<nil>
	W0729 16:58:59.267790    8891 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:58:59.271997    8891 out.go:177] * Restarting existing qemu2 VM for "multinode-877000" ...
	I0729 16:58:59.279996    8891 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:58:59.280032    8891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:66:e7:0c:b5:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2
	I0729 16:58:59.282076    8891 main.go:141] libmachine: STDOUT: 
	I0729 16:58:59.282094    8891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:58:59.282122    8891 fix.go:56] duration metric: took 14.459667ms for fixHost
	I0729 16:58:59.282128    8891 start.go:83] releasing machines lock for "multinode-877000", held for 14.475458ms
	W0729 16:58:59.282135    8891 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:58:59.282173    8891 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:58:59.282178    8891 start.go:729] Will try again in 5 seconds ...
	I0729 16:59:04.284406    8891 start.go:360] acquireMachinesLock for multinode-877000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:59:04.284935    8891 start.go:364] duration metric: took 400.5µs to acquireMachinesLock for "multinode-877000"
	I0729 16:59:04.285089    8891 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:59:04.285112    8891 fix.go:54] fixHost starting: 
	I0729 16:59:04.285816    8891 fix.go:112] recreateIfNeeded on multinode-877000: state=Stopped err=<nil>
	W0729 16:59:04.285842    8891 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:59:04.290386    8891 out.go:177] * Restarting existing qemu2 VM for "multinode-877000" ...
	I0729 16:59:04.294314    8891 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:59:04.294543    8891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:66:e7:0c:b5:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/multinode-877000/disk.qcow2
	I0729 16:59:04.303891    8891 main.go:141] libmachine: STDOUT: 
	I0729 16:59:04.303957    8891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:59:04.304045    8891 fix.go:56] duration metric: took 18.938708ms for fixHost
	I0729 16:59:04.304066    8891 start.go:83] releasing machines lock for "multinode-877000", held for 19.105584ms
	W0729 16:59:04.304259    8891 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-877000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-877000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:59:04.310334    8891 out.go:177] 
	W0729 16:59:04.314388    8891 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:59:04.314412    8891 out.go:239] * 
	* 
	W0729 16:59:04.317022    8891 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:59:04.324283    8891 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-877000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (68.902958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (20.48s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-877000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-877000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-877000-m01 --driver=qemu2 : exit status 80 (10.153572709s)

-- stdout --
	* [multinode-877000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-877000-m01" primary control-plane node in "multinode-877000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-877000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-877000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-877000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-877000-m02 --driver=qemu2 : exit status 80 (10.101450833s)

-- stdout --
	* [multinode-877000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-877000-m02" primary control-plane node in "multinode-877000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-877000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-877000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-877000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-877000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-877000: exit status 83 (79.494125ms)

-- stdout --
	* The control-plane node multinode-877000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-877000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-877000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-877000 -n multinode-877000: exit status 7 (30.889291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.48s)

TestPreload (10.01s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-675000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-675000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.865299s)

-- stdout --
	* [test-preload-675000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-675000" primary control-plane node in "test-preload-675000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-675000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:59:25.027194    8948 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:59:25.027337    8948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:59:25.027340    8948 out.go:304] Setting ErrFile to fd 2...
	I0729 16:59:25.027343    8948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:59:25.027485    8948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:59:25.028585    8948 out.go:298] Setting JSON to false
	I0729 16:59:25.044909    8948 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5332,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:59:25.044982    8948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:59:25.050826    8948 out.go:177] * [test-preload-675000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:59:25.057794    8948 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:59:25.057888    8948 notify.go:220] Checking for updates...
	I0729 16:59:25.064815    8948 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:59:25.067873    8948 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:59:25.071589    8948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:59:25.074784    8948 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:59:25.077841    8948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:59:25.081154    8948 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:59:25.081215    8948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:59:25.085759    8948 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:59:25.092825    8948 start.go:297] selected driver: qemu2
	I0729 16:59:25.092832    8948 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:59:25.092848    8948 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:59:25.095136    8948 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:59:25.098755    8948 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:59:25.101881    8948 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:59:25.101912    8948 cni.go:84] Creating CNI manager for ""
	I0729 16:59:25.101921    8948 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:59:25.101926    8948 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:59:25.101958    8948 start.go:340] cluster config:
	{Name:test-preload-675000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-675000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:59:25.106098    8948 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:59:25.114768    8948 out.go:177] * Starting "test-preload-675000" primary control-plane node in "test-preload-675000" cluster
	I0729 16:59:25.118800    8948 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0729 16:59:25.118900    8948 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/test-preload-675000/config.json ...
	I0729 16:59:25.118927    8948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/test-preload-675000/config.json: {Name:mk816bd8205edc7f785edb5f061c3593a0aa33b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:59:25.118939    8948 cache.go:107] acquiring lock: {Name:mke00dafbbc7efe9c124c54d8e3aaae3232df4f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:59:25.118947    8948 cache.go:107] acquiring lock: {Name:mkd540638f8fb3d62ecae405742a46d4337e2484 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:59:25.118956    8948 cache.go:107] acquiring lock: {Name:mk360face91a8670212791eb10142321cfd98801 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:59:25.119120    8948 cache.go:107] acquiring lock: {Name:mkc19745b343a6f15e8c885e4aa664337f7cc623 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:59:25.119184    8948 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 16:59:25.119224    8948 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 16:59:25.119232    8948 cache.go:107] acquiring lock: {Name:mkb38cf3b55d370d0766e3ad3641cadbf1de697e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:59:25.119270    8948 cache.go:107] acquiring lock: {Name:mk8ccbfbb68a3d81c8f2dcef51821b8ad2563b33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:59:25.119294    8948 cache.go:107] acquiring lock: {Name:mkeff9facdf823333df0eb06d5f80d42e3b3d9a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:59:25.119312    8948 cache.go:107] acquiring lock: {Name:mk3da4d54ec06d29c7eaf81dfc7b817dc4d37171 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:59:25.119248    8948 start.go:360] acquireMachinesLock for test-preload-675000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:59:25.119224    8948 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:59:25.119472    8948 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:59:25.119476    8948 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:59:25.119488    8948 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 16:59:25.119521    8948 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 16:59:25.119561    8948 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 16:59:25.119578    8948 start.go:364] duration metric: took 248.416µs to acquireMachinesLock for "test-preload-675000"
	I0729 16:59:25.119591    8948 start.go:93] Provisioning new machine with config: &{Name:test-preload-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-675000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:59:25.119649    8948 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:59:25.130736    8948 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:59:25.133820    8948 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 16:59:25.135015    8948 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 16:59:25.135005    8948 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:59:25.135052    8948 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 16:59:25.136579    8948 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 16:59:25.136648    8948 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:59:25.136681    8948 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:59:25.136722    8948 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 16:59:25.150713    8948 start.go:159] libmachine.API.Create for "test-preload-675000" (driver="qemu2")
	I0729 16:59:25.150735    8948 client.go:168] LocalClient.Create starting
	I0729 16:59:25.150827    8948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 16:59:25.150859    8948 main.go:141] libmachine: Decoding PEM data...
	I0729 16:59:25.150870    8948 main.go:141] libmachine: Parsing certificate...
	I0729 16:59:25.150906    8948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 16:59:25.150931    8948 main.go:141] libmachine: Decoding PEM data...
	I0729 16:59:25.150942    8948 main.go:141] libmachine: Parsing certificate...
	I0729 16:59:25.151342    8948 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:59:25.302930    8948 main.go:141] libmachine: Creating SSH key...
	I0729 16:59:25.370538    8948 main.go:141] libmachine: Creating Disk image...
	I0729 16:59:25.370564    8948 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:59:25.370821    8948 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/disk.qcow2
	I0729 16:59:25.380890    8948 main.go:141] libmachine: STDOUT: 
	I0729 16:59:25.380914    8948 main.go:141] libmachine: STDERR: 
	I0729 16:59:25.380968    8948 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/disk.qcow2 +20000M
	I0729 16:59:25.390124    8948 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:59:25.390150    8948 main.go:141] libmachine: STDERR: 
	I0729 16:59:25.390166    8948 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/disk.qcow2
	I0729 16:59:25.390169    8948 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:59:25.390181    8948 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:59:25.390214    8948 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:60:fc:cb:62:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/disk.qcow2
	I0729 16:59:25.392156    8948 main.go:141] libmachine: STDOUT: 
	I0729 16:59:25.392174    8948 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:59:25.392192    8948 client.go:171] duration metric: took 241.452792ms to LocalClient.Create
	W0729 16:59:25.521858    8948 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 16:59:25.521886    8948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 16:59:25.544206    8948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0729 16:59:25.553186    8948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0729 16:59:25.571790    8948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 16:59:25.575129    8948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0729 16:59:25.609600    8948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 16:59:25.667232    8948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 16:59:25.840227    8948 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0729 16:59:25.840296    8948 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 721.191875ms
	I0729 16:59:25.840350    8948 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0729 16:59:26.054160    8948 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 16:59:26.054239    8948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 16:59:26.274558    8948 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 16:59:26.274609    8948 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.155663167s
	I0729 16:59:26.274637    8948 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 16:59:27.392512    8948 start.go:128] duration metric: took 2.272840209s to createHost
	I0729 16:59:27.392564    8948 start.go:83] releasing machines lock for "test-preload-675000", held for 2.272973959s
	W0729 16:59:27.392629    8948 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:59:27.408777    8948 out.go:177] * Deleting "test-preload-675000" in qemu2 ...
	I0729 16:59:27.417485    8948 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0729 16:59:27.417531    8948 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.298295375s
	I0729 16:59:27.417558    8948 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	W0729 16:59:27.439710    8948 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:59:27.439744    8948 start.go:729] Will try again in 5 seconds ...
	I0729 16:59:28.605435    8948 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0729 16:59:28.605502    8948 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.486555625s
	I0729 16:59:28.605532    8948 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0729 16:59:29.370326    8948 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0729 16:59:29.370374    8948 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.251105792s
	I0729 16:59:29.370401    8948 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0729 16:59:30.409993    8948 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0729 16:59:30.410072    8948 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.291130625s
	I0729 16:59:30.410105    8948 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0729 16:59:31.102934    8948 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0729 16:59:31.102989    8948 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.983706708s
	I0729 16:59:31.103059    8948 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0729 16:59:32.440219    8948 start.go:360] acquireMachinesLock for test-preload-675000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:59:32.440648    8948 start.go:364] duration metric: took 358.167µs to acquireMachinesLock for "test-preload-675000"
	I0729 16:59:32.440780    8948 start.go:93] Provisioning new machine with config: &{Name:test-preload-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-675000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:59:32.440987    8948 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:59:32.452569    8948 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:59:32.501804    8948 start.go:159] libmachine.API.Create for "test-preload-675000" (driver="qemu2")
	I0729 16:59:32.501870    8948 client.go:168] LocalClient.Create starting
	I0729 16:59:32.501980    8948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 16:59:32.502041    8948 main.go:141] libmachine: Decoding PEM data...
	I0729 16:59:32.502060    8948 main.go:141] libmachine: Parsing certificate...
	I0729 16:59:32.502119    8948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 16:59:32.502162    8948 main.go:141] libmachine: Decoding PEM data...
	I0729 16:59:32.502175    8948 main.go:141] libmachine: Parsing certificate...
	I0729 16:59:32.502715    8948 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:59:32.678646    8948 main.go:141] libmachine: Creating SSH key...
	I0729 16:59:32.794890    8948 main.go:141] libmachine: Creating Disk image...
	I0729 16:59:32.794897    8948 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:59:32.795098    8948 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/disk.qcow2
	I0729 16:59:32.804576    8948 main.go:141] libmachine: STDOUT: 
	I0729 16:59:32.804591    8948 main.go:141] libmachine: STDERR: 
	I0729 16:59:32.804661    8948 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/disk.qcow2 +20000M
	I0729 16:59:32.812742    8948 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:59:32.812768    8948 main.go:141] libmachine: STDERR: 
	I0729 16:59:32.812780    8948 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/disk.qcow2
	I0729 16:59:32.812787    8948 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:59:32.812794    8948 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:59:32.812836    8948 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:de:d5:31:19:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/test-preload-675000/disk.qcow2
	I0729 16:59:32.814557    8948 main.go:141] libmachine: STDOUT: 
	I0729 16:59:32.814573    8948 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:59:32.814585    8948 client.go:171] duration metric: took 312.711042ms to LocalClient.Create
	I0729 16:59:32.825814    8948 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0729 16:59:32.825828    8948 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 7.706645083s
	I0729 16:59:32.825834    8948 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0729 16:59:32.825849    8948 cache.go:87] Successfully saved all images to host disk.
	I0729 16:59:34.816901    8948 start.go:128] duration metric: took 2.375851375s to createHost
	I0729 16:59:34.816961    8948 start.go:83] releasing machines lock for "test-preload-675000", held for 2.376290709s
	W0729 16:59:34.817284    8948 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-675000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-675000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:59:34.829810    8948 out.go:177] 
	W0729 16:59:34.833971    8948 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:59:34.833996    8948 out.go:239] * 
	* 
	W0729 16:59:34.836774    8948 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:59:34.848902    8948 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-675000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-29 16:59:34.867476 -0700 PDT m=+732.173796585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-675000 -n test-preload-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-675000 -n test-preload-675000: exit status 7 (64.373542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-675000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-675000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-675000
--- FAIL: TestPreload (10.01s)

TestScheduledStopUnix (10s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-655000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-655000 --memory=2048 --driver=qemu2 : exit status 80 (9.849206125s)

-- stdout --
	* [scheduled-stop-655000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-655000" primary control-plane node in "scheduled-stop-655000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-655000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-655000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-655000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-655000" primary control-plane node in "scheduled-stop-655000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-655000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-655000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-29 16:59:44.856666 -0700 PDT m=+742.162992835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-655000 -n scheduled-stop-655000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-655000 -n scheduled-stop-655000: exit status 7 (69.887208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-655000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-655000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-655000
--- FAIL: TestScheduledStopUnix (10.00s)

TestSkaffold (12.19s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2139089664 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2139089664 version: (1.063788625s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-626000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-626000 --memory=2600 --driver=qemu2 : exit status 80 (9.86111775s)

-- stdout --
	* [skaffold-626000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-626000" primary control-plane node in "skaffold-626000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-626000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-626000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-626000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-626000" primary control-plane node in "skaffold-626000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-626000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-626000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-29 16:59:57.050175 -0700 PDT m=+754.356508335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-626000 -n skaffold-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-626000 -n skaffold-626000: exit status 7 (63.06125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-626000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-626000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-626000
--- FAIL: TestSkaffold (12.19s)

TestRunningBinaryUpgrade (653.98s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2175575691 start -p running-upgrade-449000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2175575691 start -p running-upgrade-449000 --memory=2200 --vm-driver=qemu2 : (1m0.576246042s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-449000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-449000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9m19.06161375s)

-- stdout --
	* [running-upgrade-449000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-449000" primary control-plane node in "running-upgrade-449000" cluster
	* Updating the running qemu2 "running-upgrade-449000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 17:01:19.673416    9524 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:01:19.673551    9524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:01:19.673554    9524 out.go:304] Setting ErrFile to fd 2...
	I0729 17:01:19.673556    9524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:01:19.673704    9524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:01:19.674815    9524 out.go:298] Setting JSON to false
	I0729 17:01:19.691725    9524 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5446,"bootTime":1722292233,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:01:19.691821    9524 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:01:19.696300    9524 out.go:177] * [running-upgrade-449000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:01:19.703291    9524 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:01:19.703370    9524 notify.go:220] Checking for updates...
	I0729 17:01:19.711310    9524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:01:19.715311    9524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:01:19.718270    9524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:01:19.721276    9524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:01:19.724252    9524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:01:19.727562    9524 config.go:182] Loaded profile config "running-upgrade-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 17:01:19.730235    9524 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 17:01:19.733232    9524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:01:19.737285    9524 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 17:01:19.744250    9524 start.go:297] selected driver: qemu2
	I0729 17:01:19.744256    9524 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51331 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 17:01:19.744308    9524 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:01:19.746649    9524 cni.go:84] Creating CNI manager for ""
	I0729 17:01:19.746665    9524 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:01:19.746691    9524 start.go:340] cluster config:
	{Name:running-upgrade-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51331 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 17:01:19.746733    9524 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:01:19.754229    9524 out.go:177] * Starting "running-upgrade-449000" primary control-plane node in "running-upgrade-449000" cluster
	I0729 17:01:19.758206    9524 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 17:01:19.758218    9524 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 17:01:19.758225    9524 cache.go:56] Caching tarball of preloaded images
	I0729 17:01:19.758280    9524 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:01:19.758284    9524 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 17:01:19.758325    9524 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/config.json ...
	I0729 17:01:19.758645    9524 start.go:360] acquireMachinesLock for running-upgrade-449000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:01:32.256462    9524 start.go:364] duration metric: took 12.497811542s to acquireMachinesLock for "running-upgrade-449000"
	I0729 17:01:32.256622    9524 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:01:32.256669    9524 fix.go:54] fixHost starting: 
	I0729 17:01:32.257678    9524 fix.go:112] recreateIfNeeded on running-upgrade-449000: state=Running err=<nil>
	W0729 17:01:32.257688    9524 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:01:32.263798    9524 out.go:177] * Updating the running qemu2 "running-upgrade-449000" VM ...
	I0729 17:01:32.270790    9524 machine.go:94] provisionDockerMachine start ...
	I0729 17:01:32.270870    9524 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:32.271015    9524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052a6a10] 0x1052a9270 <nil>  [] 0s} localhost 51264 <nil> <nil>}
	I0729 17:01:32.271020    9524 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 17:01:32.331609    9524 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-449000
	
	I0729 17:01:32.331626    9524 buildroot.go:166] provisioning hostname "running-upgrade-449000"
	I0729 17:01:32.331683    9524 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:32.331811    9524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052a6a10] 0x1052a9270 <nil>  [] 0s} localhost 51264 <nil> <nil>}
	I0729 17:01:32.331818    9524 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-449000 && echo "running-upgrade-449000" | sudo tee /etc/hostname
	I0729 17:01:32.396325    9524 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-449000
	
	I0729 17:01:32.396383    9524 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:32.396524    9524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052a6a10] 0x1052a9270 <nil>  [] 0s} localhost 51264 <nil> <nil>}
	I0729 17:01:32.396532    9524 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-449000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-449000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-449000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:01:32.455220    9524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:01:32.455234    9524 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19346-7076/.minikube CaCertPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19346-7076/.minikube}
	I0729 17:01:32.455243    9524 buildroot.go:174] setting up certificates
	I0729 17:01:32.455248    9524 provision.go:84] configureAuth start
	I0729 17:01:32.455257    9524 provision.go:143] copyHostCerts
	I0729 17:01:32.455327    9524 exec_runner.go:144] found /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.pem, removing ...
	I0729 17:01:32.455338    9524 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.pem
	I0729 17:01:32.455469    9524 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.pem (1082 bytes)
	I0729 17:01:32.455662    9524 exec_runner.go:144] found /Users/jenkins/minikube-integration/19346-7076/.minikube/cert.pem, removing ...
	I0729 17:01:32.455666    9524 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19346-7076/.minikube/cert.pem
	I0729 17:01:32.455715    9524 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19346-7076/.minikube/cert.pem (1123 bytes)
	I0729 17:01:32.455820    9524 exec_runner.go:144] found /Users/jenkins/minikube-integration/19346-7076/.minikube/key.pem, removing ...
	I0729 17:01:32.455824    9524 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19346-7076/.minikube/key.pem
	I0729 17:01:32.455864    9524 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19346-7076/.minikube/key.pem (1679 bytes)
	I0729 17:01:32.455945    9524 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-449000 san=[127.0.0.1 localhost minikube running-upgrade-449000]
	I0729 17:01:32.694785    9524 provision.go:177] copyRemoteCerts
	I0729 17:01:32.694832    9524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:01:32.694840    9524 sshutil.go:53] new ssh client: &{IP:localhost Port:51264 SSHKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/running-upgrade-449000/id_rsa Username:docker}
	I0729 17:01:32.727707    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 17:01:32.734708    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 17:01:32.741766    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 17:01:32.748924    9524 provision.go:87] duration metric: took 293.671583ms to configureAuth
	I0729 17:01:32.748931    9524 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:01:32.749046    9524 config.go:182] Loaded profile config "running-upgrade-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 17:01:32.749094    9524 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:32.749187    9524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052a6a10] 0x1052a9270 <nil>  [] 0s} localhost 51264 <nil> <nil>}
	I0729 17:01:32.749191    9524 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 17:01:32.804975    9524 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 17:01:32.804984    9524 buildroot.go:70] root file system type: tmpfs
	I0729 17:01:32.805040    9524 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 17:01:32.805086    9524 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:32.805202    9524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052a6a10] 0x1052a9270 <nil>  [] 0s} localhost 51264 <nil> <nil>}
	I0729 17:01:32.805238    9524 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 17:01:32.865989    9524 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 17:01:32.866047    9524 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:32.866172    9524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052a6a10] 0x1052a9270 <nil>  [] 0s} localhost 51264 <nil> <nil>}
	I0729 17:01:32.866185    9524 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 17:01:32.924177    9524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:01:32.924188    9524 machine.go:97] duration metric: took 653.3925ms to provisionDockerMachine
	I0729 17:01:32.924194    9524 start.go:293] postStartSetup for "running-upgrade-449000" (driver="qemu2")
	I0729 17:01:32.924212    9524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:01:32.924274    9524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:01:32.924282    9524 sshutil.go:53] new ssh client: &{IP:localhost Port:51264 SSHKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/running-upgrade-449000/id_rsa Username:docker}
	I0729 17:01:32.954397    9524 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:01:32.955657    9524 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 17:01:32.955664    9524 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19346-7076/.minikube/addons for local assets ...
	I0729 17:01:32.955731    9524 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19346-7076/.minikube/files for local assets ...
	I0729 17:01:32.955817    9524 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19346-7076/.minikube/files/etc/ssl/certs/75652.pem -> 75652.pem in /etc/ssl/certs
	I0729 17:01:32.955910    9524 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:01:32.958802    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/files/etc/ssl/certs/75652.pem --> /etc/ssl/certs/75652.pem (1708 bytes)
	I0729 17:01:32.965943    9524 start.go:296] duration metric: took 41.734458ms for postStartSetup
	I0729 17:01:32.965958    9524 fix.go:56] duration metric: took 709.300917ms for fixHost
	I0729 17:01:32.965993    9524 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:32.966104    9524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052a6a10] 0x1052a9270 <nil>  [] 0s} localhost 51264 <nil> <nil>}
	I0729 17:01:32.966108    9524 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 17:01:33.021258    9524 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722297693.206457653
	
	I0729 17:01:33.021267    9524 fix.go:216] guest clock: 1722297693.206457653
	I0729 17:01:33.021271    9524 fix.go:229] Guest: 2024-07-29 17:01:33.206457653 -0700 PDT Remote: 2024-07-29 17:01:32.96596 -0700 PDT m=+13.313444918 (delta=240.497653ms)
	I0729 17:01:33.021281    9524 fix.go:200] guest clock delta is within tolerance: 240.497653ms
	I0729 17:01:33.021283    9524 start.go:83] releasing machines lock for "running-upgrade-449000", held for 764.72425ms
	I0729 17:01:33.021352    9524 ssh_runner.go:195] Run: cat /version.json
	I0729 17:01:33.021360    9524 sshutil.go:53] new ssh client: &{IP:localhost Port:51264 SSHKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/running-upgrade-449000/id_rsa Username:docker}
	I0729 17:01:33.021364    9524 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:01:33.021399    9524 sshutil.go:53] new ssh client: &{IP:localhost Port:51264 SSHKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/running-upgrade-449000/id_rsa Username:docker}
	W0729 17:01:33.021955    9524 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51264: connect: connection refused
	I0729 17:01:33.021976    9524 retry.go:31] will retry after 362.040869ms: dial tcp [::1]:51264: connect: connection refused
	W0729 17:01:33.052999    9524 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 17:01:33.053046    9524 ssh_runner.go:195] Run: systemctl --version
	I0729 17:01:33.054849    9524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:01:33.056397    9524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:01:33.056434    9524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 17:01:33.059377    9524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 17:01:33.063541    9524 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:01:33.063550    9524 start.go:495] detecting cgroup driver to use...
	I0729 17:01:33.063621    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:01:33.068737    9524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 17:01:33.072181    9524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 17:01:33.075124    9524 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 17:01:33.075149    9524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 17:01:33.077992    9524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 17:01:33.081551    9524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 17:01:33.085009    9524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 17:01:33.088287    9524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:01:33.091060    9524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 17:01:33.093979    9524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 17:01:33.097385    9524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 17:01:33.100499    9524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:01:33.103057    9524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:01:33.106326    9524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:01:33.208299    9524 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 17:01:33.219579    9524 start.go:495] detecting cgroup driver to use...
	I0729 17:01:33.219649    9524 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 17:01:33.236055    9524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:01:33.241281    9524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:01:33.246793    9524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:01:33.251472    9524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 17:01:33.256185    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:01:33.261435    9524 ssh_runner.go:195] Run: which cri-dockerd
	I0729 17:01:33.262766    9524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 17:01:33.265860    9524 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 17:01:33.271013    9524 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 17:01:33.373208    9524 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 17:01:33.474761    9524 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 17:01:33.474824    9524 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 17:01:33.480739    9524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:01:33.584893    9524 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 17:01:50.196820    9524 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.611915875s)
	I0729 17:01:50.196897    9524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 17:01:50.202299    9524 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 17:01:50.210493    9524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 17:01:50.216073    9524 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 17:01:50.301679    9524 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 17:01:50.384068    9524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:01:50.465407    9524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 17:01:50.471460    9524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 17:01:50.475997    9524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:01:50.565638    9524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 17:01:50.606713    9524 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 17:01:50.606782    9524 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 17:01:50.610081    9524 start.go:563] Will wait 60s for crictl version
	I0729 17:01:50.610148    9524 ssh_runner.go:195] Run: which crictl
	I0729 17:01:50.611504    9524 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:01:50.624310    9524 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 17:01:50.624380    9524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 17:01:50.637402    9524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 17:01:50.653316    9524 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 17:01:50.653437    9524 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 17:01:50.654747    9524 kubeadm.go:883] updating cluster {Name:running-upgrade-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51331 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:running-upgrade-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 17:01:50.654798    9524 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 17:01:50.654838    9524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 17:01:50.665882    9524 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 17:01:50.665890    9524 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 17:01:50.665939    9524 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 17:01:50.669692    9524 ssh_runner.go:195] Run: which lz4
	I0729 17:01:50.671066    9524 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 17:01:50.672422    9524 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 17:01:50.672431    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 17:01:51.673306    9524 docker.go:649] duration metric: took 1.002278041s to copy over tarball
	I0729 17:01:51.673363    9524 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 17:01:52.963457    9524 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.29007675s)
	I0729 17:01:52.963475    9524 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 17:01:52.983171    9524 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 17:01:52.987434    9524 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 17:01:52.993563    9524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:01:53.089250    9524 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 17:02:14.622810    9524 ssh_runner.go:235] Completed: sudo systemctl restart docker: (21.533556s)
	I0729 17:02:14.622921    9524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 17:02:14.658475    9524 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 17:02:14.658484    9524 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 17:02:14.658489    9524 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 17:02:14.662555    9524 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:02:14.663705    9524 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 17:02:14.666074    9524 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 17:02:14.666546    9524 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:02:14.668765    9524 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 17:02:14.669010    9524 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 17:02:14.670521    9524 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 17:02:14.670547    9524 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 17:02:14.672553    9524 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 17:02:14.672621    9524 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 17:02:14.674607    9524 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 17:02:14.674628    9524 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 17:02:14.678866    9524 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 17:02:14.678860    9524 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 17:02:14.681115    9524 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 17:02:14.683099    9524 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 17:02:15.089439    9524 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 17:02:15.114701    9524 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 17:02:15.114738    9524 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 17:02:15.114744    9524 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 17:02:15.114765    9524 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 17:02:15.120723    9524 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 17:02:15.122111    9524 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	W0729 17:02:15.127017    9524 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 17:02:15.127160    9524 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 17:02:15.145900    9524 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 17:02:15.186414    9524 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 17:02:15.194844    9524 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 17:02:15.194862    9524 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 17:02:15.194879    9524 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 17:02:15.194918    9524 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 17:02:15.250881    9524 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 17:02:15.250902    9524 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 17:02:15.250958    9524 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 17:02:15.250987    9524 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 17:02:15.251000    9524 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 17:02:15.251008    9524 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 17:02:15.251018    9524 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 17:02:15.251030    9524 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 17:02:15.251035    9524 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 17:02:15.259124    9524 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 17:02:15.259144    9524 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 17:02:15.259199    9524 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 17:02:15.268367    9524 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0729 17:02:15.270155    9524 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 17:02:15.270276    9524 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:02:15.307625    9524 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 17:02:15.307645    9524 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 17:02:15.307751    9524 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 17:02:15.309058    9524 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 17:02:15.309109    9524 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 17:02:15.325104    9524 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 17:02:15.331559    9524 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 17:02:15.331591    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 17:02:15.331624    9524 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0729 17:02:15.331631    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0729 17:02:15.331789    9524 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 17:02:15.331813    9524 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:02:15.331852    9524 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:02:15.374563    9524 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 17:02:15.374696    9524 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 17:02:15.389617    9524 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 17:02:15.389650    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 17:02:15.458301    9524 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 17:02:15.458325    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 17:02:15.627054    9524 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 17:02:15.627076    9524 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 17:02:15.627083    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 17:02:15.992409    9524 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 17:02:15.992433    9524 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 17:02:15.992441    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0729 17:02:16.131521    9524 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 17:02:16.131564    9524 cache_images.go:92] duration metric: took 1.4730685s to LoadCachedImages
	W0729 17:02:16.131605    9524 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0729 17:02:16.131610    9524 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 17:02:16.131660    9524 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-449000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:02:16.131725    9524 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 17:02:16.144928    9524 cni.go:84] Creating CNI manager for ""
	I0729 17:02:16.144941    9524 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:02:16.144947    9524 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 17:02:16.144955    9524 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-449000 NodeName:running-upgrade-449000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 17:02:16.145016    9524 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-449000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 17:02:16.145071    9524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 17:02:16.148426    9524 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 17:02:16.148457    9524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 17:02:16.151541    9524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 17:02:16.156642    9524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:02:16.161926    9524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 17:02:16.166964    9524 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 17:02:16.168261    9524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:02:16.264375    9524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:02:16.269718    9524 certs.go:68] Setting up /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000 for IP: 10.0.2.15
	I0729 17:02:16.269724    9524 certs.go:194] generating shared ca certs ...
	I0729 17:02:16.269732    9524 certs.go:226] acquiring lock for ca certs: {Name:mk1e3a56a4c4fc5577b9072afde2d071febb00e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:02:16.269881    9524 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.key
	I0729 17:02:16.269915    9524 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/proxy-client-ca.key
	I0729 17:02:16.269922    9524 certs.go:256] generating profile certs ...
	I0729 17:02:16.270008    9524 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/client.key
	I0729 17:02:16.270029    9524 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/apiserver.key.6157126e
	I0729 17:02:16.270039    9524 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/apiserver.crt.6157126e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 17:02:16.339003    9524 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/apiserver.crt.6157126e ...
	I0729 17:02:16.339009    9524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/apiserver.crt.6157126e: {Name:mkafb1fafbd047ed06ba9fbb04431e65978da7fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:02:16.339531    9524 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/apiserver.key.6157126e ...
	I0729 17:02:16.339536    9524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/apiserver.key.6157126e: {Name:mkd0dbbf40afb729571d0b909ac4a368c106abb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:02:16.339676    9524 certs.go:381] copying /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/apiserver.crt.6157126e -> /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/apiserver.crt
	I0729 17:02:16.339819    9524 certs.go:385] copying /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/apiserver.key.6157126e -> /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/apiserver.key
	I0729 17:02:16.339955    9524 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/proxy-client.key
	I0729 17:02:16.340076    9524 certs.go:484] found cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/7565.pem (1338 bytes)
	W0729 17:02:16.340097    9524 certs.go:480] ignoring /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/7565_empty.pem, impossibly tiny 0 bytes
	I0729 17:02:16.340102    9524 certs.go:484] found cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 17:02:16.340121    9524 certs.go:484] found cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem (1082 bytes)
	I0729 17:02:16.340138    9524 certs.go:484] found cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:02:16.340155    9524 certs.go:484] found cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/key.pem (1679 bytes)
	I0729 17:02:16.340195    9524 certs.go:484] found cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/files/etc/ssl/certs/75652.pem (1708 bytes)
	I0729 17:02:16.340533    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:02:16.348170    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 17:02:16.356267    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:02:16.362916    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 17:02:16.369957    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 17:02:16.376654    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 17:02:16.384182    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:02:16.391408    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:02:16.398688    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:02:16.405200    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/7565.pem --> /usr/share/ca-certificates/7565.pem (1338 bytes)
	I0729 17:02:16.412252    9524 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/files/etc/ssl/certs/75652.pem --> /usr/share/ca-certificates/75652.pem (1708 bytes)
	I0729 17:02:16.419838    9524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 17:02:16.425573    9524 ssh_runner.go:195] Run: openssl version
	I0729 17:02:16.427468    9524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:02:16.430558    9524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:02:16.432261    9524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:00 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:02:16.432280    9524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:02:16.434328    9524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:02:16.437135    9524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7565.pem && ln -fs /usr/share/ca-certificates/7565.pem /etc/ssl/certs/7565.pem"
	I0729 17:02:16.440662    9524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7565.pem
	I0729 17:02:16.442093    9524 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 23:48 /usr/share/ca-certificates/7565.pem
	I0729 17:02:16.442112    9524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7565.pem
	I0729 17:02:16.443895    9524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7565.pem /etc/ssl/certs/51391683.0"
	I0729 17:02:16.446900    9524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75652.pem && ln -fs /usr/share/ca-certificates/75652.pem /etc/ssl/certs/75652.pem"
	I0729 17:02:16.449899    9524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75652.pem
	I0729 17:02:16.451218    9524 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 23:48 /usr/share/ca-certificates/75652.pem
	I0729 17:02:16.451238    9524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75652.pem
	I0729 17:02:16.453021    9524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75652.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:02:16.456104    9524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:02:16.457774    9524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 17:02:16.459627    9524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 17:02:16.461552    9524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 17:02:16.463503    9524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 17:02:16.465532    9524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 17:02:16.467297    9524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 17:02:16.469058    9524 kubeadm.go:392] StartCluster: {Name:running-upgrade-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51331 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:ru
nning-upgrade-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 17:02:16.469124    9524 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 17:02:16.480413    9524 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 17:02:16.484470    9524 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 17:02:16.484477    9524 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 17:02:16.484506    9524 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 17:02:16.488574    9524 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:02:16.488864    9524 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-449000" does not appear in /Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:02:16.488961    9524 kubeconfig.go:62] /Users/jenkins/minikube-integration/19346-7076/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-449000" cluster setting kubeconfig missing "running-upgrade-449000" context setting]
	I0729 17:02:16.489175    9524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/kubeconfig: {Name:mk580a93ad62a9c0663fd1e6ef1bfe6feb6bde87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:02:16.489593    9524 kapi.go:59] client config for running-upgrade-449000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/client.key", CAFile:"/Users/jenkins/minikube-integration/19346-7076/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10663c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 17:02:16.489925    9524 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 17:02:16.492957    9524 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-449000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0729 17:02:16.492965    9524 kubeadm.go:1160] stopping kube-system containers ...
	I0729 17:02:16.493006    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 17:02:16.506086    9524 docker.go:483] Stopping containers: [cdc678210e52 7c7db2d04088 22b4b4cd1ae7 246f8d80439a 072f1c4919b2 d3d18a731d9f ec236ce15402 a66242564691 cc6c768e6fc8 7c8bb9b446c0 84ae1004102f 70fd5a08e3e9 a8bf83bfd1bc 8847ff6a3213 ab2a16688222 d5a66a7f2bb8 e20013e50000 6778272e0785 25e79fa48655 7d26f077272f e02cda400ba2 2e0302709969 394a35a8034c e3bb396d60bf 24399e8ef251 823043fd2775 c0574366d294 7eda3d6ba70d 497ba9dfd2c9 d651aee814cb 22e56eb594a5 ba31892f20e5 db69640d9eb5 bcfae1a7158b 0eb187955c4f 3c76e12856f5 a4ce69d3392b 447e1e39ae5e 7a84b6e5445b 6a7e1a0d71dc 13c3912c925f 5d0a9538ddef]
	I0729 17:02:16.506158    9524 ssh_runner.go:195] Run: docker stop cdc678210e52 7c7db2d04088 22b4b4cd1ae7 246f8d80439a 072f1c4919b2 d3d18a731d9f ec236ce15402 a66242564691 cc6c768e6fc8 7c8bb9b446c0 84ae1004102f 70fd5a08e3e9 a8bf83bfd1bc 8847ff6a3213 ab2a16688222 d5a66a7f2bb8 e20013e50000 6778272e0785 25e79fa48655 7d26f077272f e02cda400ba2 2e0302709969 394a35a8034c e3bb396d60bf 24399e8ef251 823043fd2775 c0574366d294 7eda3d6ba70d 497ba9dfd2c9 d651aee814cb 22e56eb594a5 ba31892f20e5 db69640d9eb5 bcfae1a7158b 0eb187955c4f 3c76e12856f5 a4ce69d3392b 447e1e39ae5e 7a84b6e5445b 6a7e1a0d71dc 13c3912c925f 5d0a9538ddef
	I0729 17:02:25.578343    9524 ssh_runner.go:235] Completed: docker stop cdc678210e52 7c7db2d04088 22b4b4cd1ae7 246f8d80439a 072f1c4919b2 d3d18a731d9f ec236ce15402 a66242564691 cc6c768e6fc8 7c8bb9b446c0 84ae1004102f 70fd5a08e3e9 a8bf83bfd1bc 8847ff6a3213 ab2a16688222 d5a66a7f2bb8 e20013e50000 6778272e0785 25e79fa48655 7d26f077272f e02cda400ba2 2e0302709969 394a35a8034c e3bb396d60bf 24399e8ef251 823043fd2775 c0574366d294 7eda3d6ba70d 497ba9dfd2c9 d651aee814cb 22e56eb594a5 ba31892f20e5 db69640d9eb5 bcfae1a7158b 0eb187955c4f 3c76e12856f5 a4ce69d3392b 447e1e39ae5e 7a84b6e5445b 6a7e1a0d71dc 13c3912c925f 5d0a9538ddef: (9.072145125s)
	I0729 17:02:25.578452    9524 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 17:02:25.681288    9524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 17:02:25.684790    9524 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 30 00:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 30 00:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 30 00:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 30 00:01 /etc/kubernetes/scheduler.conf
	
	I0729 17:02:25.684830    9524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/admin.conf
	I0729 17:02:25.687822    9524 kubeadm.go:163] "https://control-plane.minikube.internal:51331" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:02:25.687850    9524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 17:02:25.690969    9524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/kubelet.conf
	I0729 17:02:25.693644    9524 kubeadm.go:163] "https://control-plane.minikube.internal:51331" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:02:25.693666    9524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 17:02:25.696250    9524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/controller-manager.conf
	I0729 17:02:25.699423    9524 kubeadm.go:163] "https://control-plane.minikube.internal:51331" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:02:25.699451    9524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 17:02:25.702586    9524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/scheduler.conf
	I0729 17:02:25.705040    9524 kubeadm.go:163] "https://control-plane.minikube.internal:51331" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:02:25.705061    9524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 17:02:25.707964    9524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 17:02:25.711297    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 17:02:25.777345    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 17:02:26.172855    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 17:02:26.421234    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 17:02:26.451231    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 17:02:26.475604    9524 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:02:26.475668    9524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:02:26.977810    9524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:02:27.477761    9524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:02:27.485115    9524 api_server.go:72] duration metric: took 1.009513584s to wait for apiserver process to appear ...
	I0729 17:02:27.485123    9524 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:02:27.485132    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:32.487246    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:32.487271    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:37.487522    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:37.487580    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:42.488080    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:42.488111    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:47.488574    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:47.488618    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:52.489342    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:52.489410    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:57.490298    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:57.490396    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:02.491417    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:02.491477    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:07.493056    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:07.493122    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:12.495170    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:12.495218    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:17.497481    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:17.497522    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:22.499822    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:22.499882    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:27.502146    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:27.502345    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:03:27.520819    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:03:27.520920    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:03:27.534639    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:03:27.534717    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:03:27.547116    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:03:27.547191    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:03:27.557748    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:03:27.557817    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:03:27.571482    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:03:27.571561    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:03:27.581809    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:03:27.581890    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:03:27.591716    9524 logs.go:276] 0 containers: []
	W0729 17:03:27.591730    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:03:27.591783    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:03:27.601784    9524 logs.go:276] 0 containers: []
	W0729 17:03:27.601794    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:03:27.601799    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:03:27.601805    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:03:27.612940    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:03:27.612954    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:03:27.625734    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:03:27.625747    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:03:27.665756    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:03:27.665767    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:03:27.764177    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:03:27.764193    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:03:27.778561    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:03:27.778572    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:03:27.793199    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:03:27.793212    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:03:27.805469    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:03:27.805483    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:03:27.817699    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:03:27.817712    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:03:27.822200    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:03:27.822208    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:03:27.844861    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:03:27.844872    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:03:27.869030    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:03:27.869037    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:03:27.885455    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:03:27.885465    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:03:27.899265    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:03:27.899277    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:03:27.910789    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:03:27.910801    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:03:27.922587    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:03:27.922600    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:03:27.933921    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:03:27.933933    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:03:30.482695    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:35.484991    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:35.485224    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:03:35.504144    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:03:35.504230    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:03:35.518541    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:03:35.518620    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:03:35.530455    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:03:35.530526    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:03:35.540809    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:03:35.540879    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:03:35.551132    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:03:35.551200    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:03:35.561979    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:03:35.562060    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:03:35.574779    9524 logs.go:276] 0 containers: []
	W0729 17:03:35.574792    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:03:35.574846    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:03:35.586871    9524 logs.go:276] 0 containers: []
	W0729 17:03:35.586884    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:03:35.586890    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:03:35.586896    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:03:35.601503    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:03:35.601514    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:03:35.613520    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:03:35.613531    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:03:35.626273    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:03:35.626286    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:03:35.638793    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:03:35.638804    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:03:35.652673    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:03:35.652688    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:03:35.692088    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:03:35.692099    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:03:35.703912    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:03:35.703924    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:03:35.715423    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:03:35.715434    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:03:35.739721    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:03:35.739730    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:03:35.778889    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:03:35.778899    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:03:35.792525    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:03:35.792536    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:03:35.804302    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:03:35.804313    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:03:35.815831    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:03:35.815843    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:03:35.821017    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:03:35.821028    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:03:35.834594    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:03:35.834604    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:03:35.852044    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:03:35.852055    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:03:38.398717    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:43.401167    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:43.401601    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:03:43.434235    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:03:43.434359    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:03:43.454565    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:03:43.454660    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:03:43.468342    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:03:43.468427    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:03:43.480897    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:03:43.480961    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:03:43.491982    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:03:43.492055    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:03:43.503199    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:03:43.503268    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:03:43.514389    9524 logs.go:276] 0 containers: []
	W0729 17:03:43.514400    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:03:43.514458    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:03:43.525546    9524 logs.go:276] 0 containers: []
	W0729 17:03:43.525557    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:03:43.525562    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:03:43.525567    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:03:43.572715    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:03:43.572725    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:03:43.577725    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:03:43.577731    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:03:43.593279    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:03:43.593294    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:03:43.612329    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:03:43.612340    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:03:43.627837    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:03:43.627851    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:03:43.639717    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:03:43.639729    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:03:43.677769    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:03:43.677781    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:03:43.691698    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:03:43.691708    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:03:43.703052    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:03:43.703064    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:03:43.715237    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:03:43.715249    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:03:43.727044    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:03:43.727055    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:03:43.738605    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:03:43.738615    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:03:43.764338    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:03:43.764346    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:03:43.776392    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:03:43.776405    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:03:43.813392    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:03:43.813403    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:03:43.825685    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:03:43.825700    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:03:46.337971    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:51.340442    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:51.340528    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:03:51.355830    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:03:51.355900    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:03:51.366359    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:03:51.366429    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:03:51.376285    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:03:51.376364    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:03:51.387201    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:03:51.387276    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:03:51.398307    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:03:51.398371    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:03:51.413593    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:03:51.413659    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:03:51.424160    9524 logs.go:276] 0 containers: []
	W0729 17:03:51.424170    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:03:51.424238    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:03:51.434472    9524 logs.go:276] 0 containers: []
	W0729 17:03:51.434484    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:03:51.434490    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:03:51.434496    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:03:51.462678    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:03:51.462689    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:03:51.479876    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:03:51.479885    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:03:51.490713    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:03:51.490724    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:03:51.514708    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:03:51.514716    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:03:51.558729    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:03:51.558744    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:03:51.570130    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:03:51.570143    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:03:51.582154    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:03:51.582165    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:03:51.593439    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:03:51.593452    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:03:51.607015    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:03:51.607027    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:03:51.611669    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:03:51.611676    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:03:51.649391    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:03:51.649402    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:03:51.663693    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:03:51.663707    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:03:51.677813    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:03:51.677823    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:03:51.715507    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:03:51.715518    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:03:51.729479    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:03:51.729492    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:03:51.741051    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:03:51.741061    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:03:54.255017    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:59.257753    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:59.258075    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:03:59.286672    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:03:59.286801    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:03:59.306895    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:03:59.306970    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:03:59.320762    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:03:59.320838    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:03:59.332910    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:03:59.333002    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:03:59.350213    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:03:59.350288    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:03:59.361184    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:03:59.361257    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:03:59.371754    9524 logs.go:276] 0 containers: []
	W0729 17:03:59.371764    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:03:59.371825    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:03:59.382161    9524 logs.go:276] 0 containers: []
	W0729 17:03:59.382170    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:03:59.382176    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:03:59.382182    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:03:59.428084    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:03:59.428097    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:03:59.467708    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:03:59.467720    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:03:59.479807    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:03:59.479818    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:03:59.492020    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:03:59.492030    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:03:59.496963    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:03:59.496974    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:03:59.533594    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:03:59.533607    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:03:59.548669    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:03:59.548680    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:03:59.560728    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:03:59.560745    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:03:59.578216    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:03:59.578231    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:03:59.590741    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:03:59.590759    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:03:59.601878    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:03:59.601890    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:03:59.622235    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:03:59.622245    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:03:59.646764    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:03:59.646775    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:03:59.664274    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:03:59.664283    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:03:59.675993    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:03:59.676005    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:03:59.687038    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:03:59.687048    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:04:02.200629    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:07.203149    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:07.203431    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:07.233383    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:04:07.233521    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:07.251509    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:04:07.251605    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:07.265518    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:04:07.265602    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:07.277690    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:04:07.277763    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:07.288791    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:04:07.288858    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:07.300592    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:04:07.300665    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:07.311002    9524 logs.go:276] 0 containers: []
	W0729 17:04:07.311016    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:07.311071    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:07.321381    9524 logs.go:276] 0 containers: []
	W0729 17:04:07.321393    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:04:07.321400    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:07.321406    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:07.326100    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:04:07.326109    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:04:07.340223    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:04:07.340235    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:04:07.350997    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:04:07.351009    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:04:07.362997    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:04:07.363010    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:04:07.377096    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:04:07.377105    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:04:07.388118    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:04:07.388135    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:04:07.399665    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:07.399678    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:07.427115    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:04:07.427125    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:04:07.444348    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:04:07.444360    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:04:07.456801    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:04:07.456813    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:04:07.468966    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:04:07.468977    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:04:07.486444    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:07.486455    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:07.530410    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:07.530417    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:07.565580    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:04:07.565591    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:04:07.604594    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:04:07.604605    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:04:07.616830    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:04:07.616841    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:10.131334    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:15.133217    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:15.133592    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:15.166188    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:04:15.166322    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:15.185878    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:04:15.185975    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:15.201715    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:04:15.201792    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:15.213524    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:04:15.213603    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:15.224458    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:04:15.224534    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:15.235256    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:04:15.235323    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:15.246246    9524 logs.go:276] 0 containers: []
	W0729 17:04:15.246256    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:15.246311    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:15.256939    9524 logs.go:276] 0 containers: []
	W0729 17:04:15.256953    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:04:15.256960    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:04:15.256966    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:04:15.299387    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:04:15.299398    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:04:15.313751    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:04:15.313762    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:04:15.326726    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:04:15.326738    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:04:15.338534    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:04:15.338545    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:04:15.351205    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:15.351215    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:15.398654    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:15.398666    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:15.434114    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:04:15.434127    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:04:15.446953    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:04:15.446968    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:04:15.464155    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:04:15.464169    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:15.479117    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:04:15.479128    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:04:15.505293    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:04:15.505308    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:04:15.518084    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:15.518094    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:15.543756    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:15.543767    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:15.548153    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:04:15.548159    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:04:15.562718    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:04:15.562729    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:04:15.575379    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:04:15.575393    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:04:18.095593    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:23.098064    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:23.098527    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:23.135742    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:04:23.135883    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:23.157976    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:04:23.158087    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:23.172675    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:04:23.172756    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:23.185034    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:04:23.185108    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:23.197252    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:04:23.197327    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:23.208646    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:04:23.208727    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:23.219688    9524 logs.go:276] 0 containers: []
	W0729 17:04:23.219702    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:23.219759    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:23.235501    9524 logs.go:276] 0 containers: []
	W0729 17:04:23.235516    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:04:23.235522    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:23.235529    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:23.276636    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:04:23.276649    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:04:23.289915    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:04:23.289927    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:04:23.307305    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:23.307315    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:23.351872    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:04:23.351881    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:04:23.391190    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:04:23.391201    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:04:23.405986    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:04:23.405997    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:04:23.418093    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:04:23.418108    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:04:23.432176    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:04:23.432188    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:23.444346    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:23.444356    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:23.449219    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:04:23.449226    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:04:23.464210    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:04:23.464221    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:04:23.482293    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:04:23.482303    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:04:23.493629    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:04:23.493639    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:04:23.511757    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:23.511768    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:23.536199    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:04:23.536211    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:04:23.548213    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:04:23.548223    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:04:26.062529    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:31.064909    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:31.065090    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:31.083778    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:04:31.083861    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:31.097838    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:04:31.097910    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:31.109636    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:04:31.109709    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:31.125951    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:04:31.126017    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:31.136424    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:04:31.136487    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:31.146883    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:04:31.146949    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:31.157393    9524 logs.go:276] 0 containers: []
	W0729 17:04:31.157403    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:31.157455    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:31.168628    9524 logs.go:276] 0 containers: []
	W0729 17:04:31.168639    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:04:31.168645    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:04:31.168649    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:04:31.180058    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:04:31.180069    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:04:31.194495    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:04:31.194507    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:31.206869    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:04:31.206880    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:04:31.244852    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:04:31.244863    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:04:31.256947    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:31.256958    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:31.282790    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:31.282806    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:31.329803    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:31.329814    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:31.365059    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:04:31.365071    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:04:31.385329    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:04:31.385343    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:04:31.397895    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:04:31.397908    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:04:31.410312    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:04:31.410323    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:04:31.427652    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:04:31.427662    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:04:31.438722    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:04:31.438734    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:04:31.453225    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:04:31.453237    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:04:31.468323    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:04:31.468334    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:04:31.479786    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:31.479796    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:33.986451    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:38.987333    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:38.987561    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:39.011184    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:04:39.011313    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:39.027845    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:04:39.027922    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:39.040723    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:04:39.040791    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:39.052215    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:04:39.052276    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:39.062411    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:04:39.062482    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:39.073508    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:04:39.073569    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:39.083774    9524 logs.go:276] 0 containers: []
	W0729 17:04:39.083787    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:39.083844    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:39.094075    9524 logs.go:276] 0 containers: []
	W0729 17:04:39.094087    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:04:39.094092    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:04:39.094097    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:04:39.132987    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:04:39.132999    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:04:39.144506    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:04:39.144518    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:04:39.156780    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:04:39.156790    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:04:39.168651    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:39.168665    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:39.215442    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:39.215452    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:39.219845    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:04:39.219853    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:04:39.231778    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:04:39.231791    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:04:39.250363    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:39.250372    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:39.274233    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:04:39.274242    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:04:39.288689    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:04:39.288699    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:04:39.303600    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:04:39.303608    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:04:39.316701    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:39.316712    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:39.352235    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:04:39.352244    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:04:39.366264    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:04:39.366275    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:39.379411    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:04:39.379425    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:04:39.391061    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:04:39.391074    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:04:41.904636    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:46.907058    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:46.907374    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:46.948964    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:04:46.949068    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:46.970371    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:04:46.970460    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:46.982804    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:04:46.982880    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:46.995460    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:04:46.995533    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:47.012399    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:04:47.012464    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:47.023548    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:04:47.023619    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:47.033715    9524 logs.go:276] 0 containers: []
	W0729 17:04:47.033726    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:47.033811    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:47.044607    9524 logs.go:276] 0 containers: []
	W0729 17:04:47.044618    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:04:47.044623    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:04:47.044629    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:04:47.062014    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:47.062024    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:47.086633    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:04:47.086641    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:47.098250    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:47.098262    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:47.134329    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:04:47.134345    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:04:47.148842    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:04:47.148853    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:04:47.167718    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:04:47.167730    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:04:47.184722    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:04:47.184732    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:04:47.197749    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:04:47.197762    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:04:47.212827    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:04:47.212843    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:04:47.251086    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:04:47.251101    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:04:47.265313    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:04:47.265330    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:04:47.277945    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:47.277959    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:47.322688    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:47.322696    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:47.326902    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:04:47.326911    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:04:47.338109    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:04:47.338121    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:04:47.351165    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:04:47.351179    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:04:49.864269    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:54.866605    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:54.866820    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:54.888900    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:04:54.889005    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:54.911333    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:04:54.911412    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:54.923053    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:04:54.923126    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:54.933562    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:04:54.933636    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:54.943555    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:04:54.943624    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:54.954101    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:04:54.954176    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:54.965095    9524 logs.go:276] 0 containers: []
	W0729 17:04:54.965108    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:54.965163    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:54.975058    9524 logs.go:276] 0 containers: []
	W0729 17:04:54.975070    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:04:54.975076    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:04:54.975083    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:54.986761    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:54.986774    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:54.991074    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:04:54.991080    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:04:55.005648    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:04:55.005662    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:04:55.017920    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:55.017931    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:55.042575    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:04:55.042582    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:04:55.057065    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:55.057078    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:55.103086    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:04:55.103095    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:04:55.118178    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:04:55.118191    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:04:55.130227    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:04:55.130238    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:04:55.147592    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:04:55.147601    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:04:55.159008    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:04:55.159019    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:04:55.170273    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:04:55.170283    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:04:55.182127    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:04:55.182136    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:04:55.203099    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:55.203114    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:55.238587    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:04:55.238602    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:04:55.253281    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:04:55.253290    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:04:57.793720    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:02.796345    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:02.797868    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:05:02.832468    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:05:02.832577    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:05:02.848153    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:05:02.848240    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:05:02.860228    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:05:02.860301    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:05:02.871311    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:05:02.871383    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:05:02.882249    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:05:02.882318    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:05:02.892700    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:05:02.892766    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:05:02.907103    9524 logs.go:276] 0 containers: []
	W0729 17:05:02.907118    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:05:02.907182    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:05:02.917559    9524 logs.go:276] 0 containers: []
	W0729 17:05:02.917573    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:05:02.917579    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:05:02.917586    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:05:02.932292    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:05:02.932305    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:05:02.943373    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:05:02.943383    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:05:02.955473    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:05:02.955486    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:05:02.968009    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:05:02.968020    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:05:03.015405    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:05:03.015414    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:05:03.036820    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:05:03.036832    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:05:03.051006    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:05:03.051019    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:05:03.065326    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:05:03.065340    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:05:03.077158    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:05:03.077169    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:05:03.102055    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:05:03.102067    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:05:03.113358    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:05:03.113373    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:05:03.127107    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:05:03.127119    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:05:03.138678    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:05:03.138691    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:05:03.143236    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:05:03.143243    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:05:03.179079    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:05:03.179090    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:05:03.220269    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:05:03.220281    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:05:05.734728    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:10.737122    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:10.737267    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:05:10.750535    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:05:10.750614    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:05:10.762507    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:05:10.762581    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:05:10.773263    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:05:10.773338    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:05:10.783886    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:05:10.783957    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:05:10.794491    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:05:10.794560    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:05:10.805013    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:05:10.805087    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:05:10.818722    9524 logs.go:276] 0 containers: []
	W0729 17:05:10.818733    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:05:10.818791    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:05:10.839011    9524 logs.go:276] 0 containers: []
	W0729 17:05:10.839028    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:05:10.839034    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:05:10.839040    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:05:10.858278    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:05:10.858289    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:05:10.906449    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:05:10.906462    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:05:10.918460    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:05:10.918474    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:05:10.942881    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:05:10.942890    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:05:10.955116    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:05:10.955129    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:05:10.967172    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:05:10.967184    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:05:10.979304    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:05:10.979314    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:05:10.993097    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:05:10.993108    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:05:11.007397    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:05:11.007408    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:05:11.023146    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:05:11.023157    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:05:11.035143    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:05:11.035153    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:05:11.046285    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:05:11.046297    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:05:11.064909    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:05:11.064921    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:05:11.076127    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:05:11.076140    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:05:11.080768    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:05:11.080774    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:05:11.121563    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:05:11.121578    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:05:13.662208    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:18.664658    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:18.664835    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:05:18.676225    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:05:18.676307    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:05:18.691384    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:05:18.691458    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:05:18.702040    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:05:18.702113    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:05:18.713461    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:05:18.713536    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:05:18.724540    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:05:18.724604    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:05:18.734904    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:05:18.734974    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:05:18.745561    9524 logs.go:276] 0 containers: []
	W0729 17:05:18.745574    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:05:18.745629    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:05:18.755848    9524 logs.go:276] 0 containers: []
	W0729 17:05:18.755860    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:05:18.755866    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:05:18.755871    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:05:18.771175    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:05:18.771187    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:05:18.784870    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:05:18.784880    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:05:18.798630    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:05:18.798640    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:05:18.810534    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:05:18.810545    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:05:18.835024    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:05:18.835035    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:05:18.847092    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:05:18.847103    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:05:18.882060    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:05:18.882070    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:05:18.896308    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:05:18.896319    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:05:18.910993    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:05:18.911007    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:05:18.922550    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:05:18.922564    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:05:18.969091    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:05:18.969108    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:05:18.985732    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:05:18.985744    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:05:18.997476    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:05:18.997489    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:05:19.009510    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:05:19.009520    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:05:19.030862    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:05:19.030873    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:05:19.035270    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:05:19.035279    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:05:21.578417    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:26.580994    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:26.581182    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:05:26.596215    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:05:26.596298    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:05:26.608774    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:05:26.608839    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:05:26.619585    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:05:26.619652    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:05:26.630137    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:05:26.630204    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:05:26.640573    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:05:26.640639    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:05:26.651459    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:05:26.651528    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:05:26.668300    9524 logs.go:276] 0 containers: []
	W0729 17:05:26.668311    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:05:26.668371    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:05:26.678352    9524 logs.go:276] 0 containers: []
	W0729 17:05:26.678367    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:05:26.678372    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:05:26.678378    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:05:26.692207    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:05:26.692217    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:05:26.706755    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:05:26.706765    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:05:26.719426    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:05:26.719438    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:05:26.733491    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:05:26.733501    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:05:26.778273    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:05:26.778287    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:05:26.782731    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:05:26.782740    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:05:26.816737    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:05:26.816751    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:05:26.855421    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:05:26.855433    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:05:26.866396    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:05:26.866410    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:05:26.878026    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:05:26.878037    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:05:26.890083    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:05:26.890093    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:05:26.903623    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:05:26.903633    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:05:26.914658    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:05:26.914674    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:05:26.926513    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:05:26.926523    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:05:26.938493    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:05:26.938507    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:05:26.955771    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:05:26.955781    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:05:29.481375    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:34.484172    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:34.484419    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:05:34.503194    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:05:34.503287    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:05:34.516910    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:05:34.516989    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:05:34.530749    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:05:34.530824    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:05:34.542220    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:05:34.542284    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:05:34.552571    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:05:34.552640    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:05:34.563420    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:05:34.563491    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:05:34.574801    9524 logs.go:276] 0 containers: []
	W0729 17:05:34.574813    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:05:34.574877    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:05:34.587411    9524 logs.go:276] 0 containers: []
	W0729 17:05:34.587423    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:05:34.587428    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:05:34.587434    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:05:34.602777    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:05:34.602788    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:05:34.619089    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:05:34.619101    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:05:34.630835    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:05:34.630846    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:05:34.670061    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:05:34.670071    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:05:34.684666    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:05:34.684676    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:05:34.697132    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:05:34.697142    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:05:34.713798    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:05:34.713808    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:05:34.736002    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:05:34.736009    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:05:34.740714    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:05:34.740719    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:05:34.755895    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:05:34.755905    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:05:34.767529    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:05:34.767539    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:05:34.779244    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:05:34.779255    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:05:34.791061    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:05:34.791072    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:05:34.835064    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:05:34.835072    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:05:34.871030    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:05:34.871040    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:05:34.885066    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:05:34.885077    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:05:37.398958    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:42.401215    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:42.401493    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:05:42.425752    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:05:42.425859    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:05:42.444278    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:05:42.444367    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:05:42.457330    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:05:42.457409    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:05:42.468908    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:05:42.468978    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:05:42.479984    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:05:42.480061    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:05:42.494213    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:05:42.494291    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:05:42.506027    9524 logs.go:276] 0 containers: []
	W0729 17:05:42.506038    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:05:42.506098    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:05:42.516458    9524 logs.go:276] 0 containers: []
	W0729 17:05:42.516468    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:05:42.516474    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:05:42.516479    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:05:42.521180    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:05:42.521187    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:05:42.536990    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:05:42.537001    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:05:42.548882    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:05:42.548892    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:05:42.566491    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:05:42.566502    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:05:42.613244    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:05:42.613255    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:05:42.628130    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:05:42.628140    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:05:42.640230    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:05:42.640240    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:05:42.664594    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:05:42.664604    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:05:42.701571    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:05:42.701582    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:05:42.739953    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:05:42.739965    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:05:42.754010    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:05:42.754023    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:05:42.770300    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:05:42.770314    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:05:42.782006    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:05:42.782019    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:05:42.793508    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:05:42.793518    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:05:42.810351    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:05:42.810365    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:05:42.822606    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:05:42.822618    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:05:45.335746    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:50.338009    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:50.338112    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:05:50.349652    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:05:50.349728    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:05:50.361458    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:05:50.361535    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:05:50.372346    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:05:50.372430    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:05:50.387392    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:05:50.387459    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:05:50.398748    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:05:50.398816    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:05:50.409485    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:05:50.409564    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:05:50.419930    9524 logs.go:276] 0 containers: []
	W0729 17:05:50.419941    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:05:50.420003    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:05:50.434683    9524 logs.go:276] 0 containers: []
	W0729 17:05:50.434694    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:05:50.434701    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:05:50.434708    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:05:50.483595    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:05:50.483616    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:05:50.499755    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:05:50.499767    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:05:50.513254    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:05:50.513263    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:05:50.551545    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:05:50.551558    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:05:50.568187    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:05:50.568198    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:05:50.591888    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:05:50.591901    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:05:50.596609    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:05:50.596618    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:05:50.638476    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:05:50.638488    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:05:50.654476    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:05:50.654494    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:05:50.666866    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:05:50.666874    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:05:50.679242    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:05:50.679257    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:05:50.693968    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:05:50.693983    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:05:50.707487    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:05:50.707499    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:05:50.720917    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:05:50.720928    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:05:50.734295    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:05:50.734309    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:05:50.746699    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:05:50.746713    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:05:53.272803    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:58.275116    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:58.275262    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:05:58.288331    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:05:58.288406    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:05:58.304946    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:05:58.305018    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:05:58.315608    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:05:58.315677    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:05:58.326396    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:05:58.326461    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:05:58.336884    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:05:58.336946    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:05:58.347535    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:05:58.347609    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:05:58.357678    9524 logs.go:276] 0 containers: []
	W0729 17:05:58.357689    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:05:58.357747    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:05:58.368416    9524 logs.go:276] 0 containers: []
	W0729 17:05:58.368428    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:05:58.368433    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:05:58.368439    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:05:58.416770    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:05:58.416786    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:05:58.421360    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:05:58.421366    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:05:58.435956    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:05:58.435971    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:05:58.450741    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:05:58.450757    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:05:58.489562    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:05:58.489572    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:05:58.507634    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:05:58.507648    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:05:58.518795    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:05:58.518808    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:05:58.536073    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:05:58.536084    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:05:58.548374    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:05:58.548384    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:05:58.560425    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:05:58.560439    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:05:58.584006    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:05:58.584012    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:05:58.595778    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:05:58.595790    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:05:58.609213    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:05:58.609228    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:05:58.620412    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:05:58.620423    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:05:58.632316    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:05:58.632329    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:05:58.670243    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:05:58.670254    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:06:01.189794    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:06.192095    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:06.192340    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:06:06.212053    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:06:06.212138    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:06:06.228103    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:06:06.228169    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:06:06.239541    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:06:06.239608    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:06:06.251689    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:06:06.251740    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:06:06.262321    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:06:06.262388    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:06:06.273164    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:06:06.273230    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:06:06.283823    9524 logs.go:276] 0 containers: []
	W0729 17:06:06.283833    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:06:06.283879    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:06:06.294495    9524 logs.go:276] 0 containers: []
	W0729 17:06:06.294505    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:06:06.294511    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:06:06.294516    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:06:06.308591    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:06:06.308603    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:06:06.320551    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:06:06.320561    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:06:06.338615    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:06:06.338628    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:06:06.362952    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:06:06.362965    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:06:06.411415    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:06:06.411427    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:06:06.416466    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:06:06.416472    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:06:06.452247    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:06:06.452258    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:06:06.472062    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:06:06.472072    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:06:06.483345    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:06:06.483357    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:06:06.494615    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:06:06.494628    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:06:06.507515    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:06:06.507528    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:06:06.546761    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:06:06.546773    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:06:06.562025    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:06:06.562036    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:06:06.573252    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:06:06.573263    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:06:06.585012    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:06:06.585024    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:06:06.597078    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:06:06.597089    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:06:09.110717    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:14.113163    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:14.113389    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:06:14.139504    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:06:14.139627    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:06:14.156864    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:06:14.156955    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:06:14.170667    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:06:14.170740    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:06:14.181719    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:06:14.181793    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:06:14.194848    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:06:14.194920    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:06:14.205506    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:06:14.205576    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:06:14.217730    9524 logs.go:276] 0 containers: []
	W0729 17:06:14.217741    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:06:14.217806    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:06:14.230739    9524 logs.go:276] 0 containers: []
	W0729 17:06:14.230750    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:06:14.230756    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:06:14.230763    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:06:14.235878    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:06:14.235885    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:06:14.302237    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:06:14.302253    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:06:14.313355    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:06:14.313369    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:06:14.327277    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:06:14.327292    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:06:14.373356    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:06:14.373386    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:06:14.391057    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:06:14.391071    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:06:14.408870    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:06:14.408880    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:06:14.420251    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:06:14.420262    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:06:14.442487    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:06:14.442494    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:06:14.478642    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:06:14.478657    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:06:14.492819    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:06:14.492830    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:06:14.504425    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:06:14.504437    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:06:14.516332    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:06:14.516343    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:06:14.531904    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:06:14.531916    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:06:14.546397    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:06:14.546407    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:06:14.557938    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:06:14.557949    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:06:17.074730    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:22.076233    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:22.076566    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:06:22.108809    9524 logs.go:276] 2 containers: [fbe5f61abb4b a8bf83bfd1bc]
	I0729 17:06:22.108943    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:06:22.127338    9524 logs.go:276] 2 containers: [4e7d0f8fa990 e20013e50000]
	I0729 17:06:22.127425    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:06:22.141558    9524 logs.go:276] 2 containers: [47261b655e1a 22b4b4cd1ae7]
	I0729 17:06:22.141631    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:06:22.153758    9524 logs.go:276] 2 containers: [fd91868396a8 8847ff6a3213]
	I0729 17:06:22.153835    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:06:22.164586    9524 logs.go:276] 2 containers: [c7edf71c3bda cc6c768e6fc8]
	I0729 17:06:22.164664    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:06:22.176494    9524 logs.go:276] 2 containers: [392c2691ec7b a66242564691]
	I0729 17:06:22.176567    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:06:22.186693    9524 logs.go:276] 0 containers: []
	W0729 17:06:22.186704    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:06:22.186758    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:06:22.198396    9524 logs.go:276] 0 containers: []
	W0729 17:06:22.198407    9524 logs.go:278] No container was found matching "storage-provisioner"
	I0729 17:06:22.198413    9524 logs.go:123] Gathering logs for etcd [4e7d0f8fa990] ...
	I0729 17:06:22.198418    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7d0f8fa990"
	I0729 17:06:22.216940    9524 logs.go:123] Gathering logs for coredns [22b4b4cd1ae7] ...
	I0729 17:06:22.216954    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22b4b4cd1ae7"
	I0729 17:06:22.228503    9524 logs.go:123] Gathering logs for kube-scheduler [fd91868396a8] ...
	I0729 17:06:22.228513    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd91868396a8"
	I0729 17:06:22.240490    9524 logs.go:123] Gathering logs for kube-controller-manager [392c2691ec7b] ...
	I0729 17:06:22.240501    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 392c2691ec7b"
	I0729 17:06:22.264704    9524 logs.go:123] Gathering logs for kube-apiserver [a8bf83bfd1bc] ...
	I0729 17:06:22.264717    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8bf83bfd1bc"
	I0729 17:06:22.303616    9524 logs.go:123] Gathering logs for coredns [47261b655e1a] ...
	I0729 17:06:22.303626    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47261b655e1a"
	I0729 17:06:22.315320    9524 logs.go:123] Gathering logs for kube-scheduler [8847ff6a3213] ...
	I0729 17:06:22.315330    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8847ff6a3213"
	I0729 17:06:22.336278    9524 logs.go:123] Gathering logs for kube-proxy [c7edf71c3bda] ...
	I0729 17:06:22.336288    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7edf71c3bda"
	I0729 17:06:22.348672    9524 logs.go:123] Gathering logs for kube-controller-manager [a66242564691] ...
	I0729 17:06:22.348682    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a66242564691"
	I0729 17:06:22.366131    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:06:22.366145    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:06:22.400716    9524 logs.go:123] Gathering logs for kube-apiserver [fbe5f61abb4b] ...
	I0729 17:06:22.400727    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe5f61abb4b"
	I0729 17:06:22.423985    9524 logs.go:123] Gathering logs for kube-proxy [cc6c768e6fc8] ...
	I0729 17:06:22.423997    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc6c768e6fc8"
	I0729 17:06:22.435579    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:06:22.435594    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:06:22.459323    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:06:22.459331    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:06:22.507181    9524 logs.go:123] Gathering logs for etcd [e20013e50000] ...
	I0729 17:06:22.507189    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20013e50000"
	I0729 17:06:22.521754    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:06:22.521765    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:06:22.536577    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:06:22.536587    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:06:25.042136    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:30.044400    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:30.044456    9524 kubeadm.go:597] duration metric: took 4m13.560126375s to restartPrimaryControlPlane
	W0729 17:06:30.044506    9524 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 17:06:30.044533    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 17:06:30.996081    9524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:06:31.001528    9524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 17:06:31.004605    9524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 17:06:31.007430    9524 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 17:06:31.007436    9524 kubeadm.go:157] found existing configuration files:
	
	I0729 17:06:31.007460    9524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/admin.conf
	I0729 17:06:31.010094    9524 kubeadm.go:163] "https://control-plane.minikube.internal:51331" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 17:06:31.010120    9524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 17:06:31.012752    9524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/kubelet.conf
	I0729 17:06:31.015628    9524 kubeadm.go:163] "https://control-plane.minikube.internal:51331" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 17:06:31.015651    9524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 17:06:31.018684    9524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/controller-manager.conf
	I0729 17:06:31.021250    9524 kubeadm.go:163] "https://control-plane.minikube.internal:51331" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 17:06:31.021268    9524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 17:06:31.023816    9524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/scheduler.conf
	I0729 17:06:31.026982    9524 kubeadm.go:163] "https://control-plane.minikube.internal:51331" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51331 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 17:06:31.027013    9524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 17:06:31.029851    9524 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 17:06:31.046942    9524 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 17:06:31.047052    9524 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 17:06:31.092959    9524 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 17:06:31.093008    9524 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 17:06:31.093061    9524 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 17:06:31.145426    9524 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 17:06:31.149506    9524 out.go:204]   - Generating certificates and keys ...
	I0729 17:06:31.149545    9524 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 17:06:31.149576    9524 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 17:06:31.149632    9524 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 17:06:31.149673    9524 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 17:06:31.149780    9524 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 17:06:31.149812    9524 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 17:06:31.149842    9524 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 17:06:31.149872    9524 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 17:06:31.149903    9524 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 17:06:31.149942    9524 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 17:06:31.149964    9524 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 17:06:31.149987    9524 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 17:06:31.380943    9524 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 17:06:31.435974    9524 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 17:06:31.490623    9524 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 17:06:31.569400    9524 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 17:06:31.597769    9524 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 17:06:31.598179    9524 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 17:06:31.598208    9524 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 17:06:31.680570    9524 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 17:06:31.685236    9524 out.go:204]   - Booting up control plane ...
	I0729 17:06:31.685286    9524 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 17:06:31.685330    9524 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 17:06:31.685365    9524 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 17:06:31.685407    9524 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 17:06:31.686055    9524 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 17:06:36.186097    9524 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502019 seconds
	I0729 17:06:36.186236    9524 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 17:06:36.190726    9524 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 17:06:36.715682    9524 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 17:06:36.716142    9524 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-449000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 17:06:37.221309    9524 kubeadm.go:310] [bootstrap-token] Using token: bl5nbb.2ai5ar4i6emr4q3n
	I0729 17:06:37.224262    9524 out.go:204]   - Configuring RBAC rules ...
	I0729 17:06:37.224341    9524 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 17:06:37.245954    9524 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 17:06:37.249246    9524 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 17:06:37.252328    9524 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 17:06:37.253398    9524 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 17:06:37.254224    9524 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 17:06:37.257383    9524 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 17:06:37.450587    9524 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 17:06:37.648304    9524 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 17:06:37.648658    9524 kubeadm.go:310] 
	I0729 17:06:37.648685    9524 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 17:06:37.648689    9524 kubeadm.go:310] 
	I0729 17:06:37.648726    9524 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 17:06:37.648734    9524 kubeadm.go:310] 
	I0729 17:06:37.648753    9524 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 17:06:37.648785    9524 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 17:06:37.648819    9524 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 17:06:37.648822    9524 kubeadm.go:310] 
	I0729 17:06:37.648848    9524 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 17:06:37.648856    9524 kubeadm.go:310] 
	I0729 17:06:37.648884    9524 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 17:06:37.648887    9524 kubeadm.go:310] 
	I0729 17:06:37.648912    9524 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 17:06:37.648956    9524 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 17:06:37.649009    9524 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 17:06:37.649012    9524 kubeadm.go:310] 
	I0729 17:06:37.649057    9524 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 17:06:37.649098    9524 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 17:06:37.649103    9524 kubeadm.go:310] 
	I0729 17:06:37.649153    9524 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bl5nbb.2ai5ar4i6emr4q3n \
	I0729 17:06:37.649211    9524 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0590c93eea840245319f62698163347e7b5c66f98e4c9e27c4a0315b2e5764a4 \
	I0729 17:06:37.649223    9524 kubeadm.go:310] 	--control-plane 
	I0729 17:06:37.649228    9524 kubeadm.go:310] 
	I0729 17:06:37.649276    9524 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 17:06:37.649281    9524 kubeadm.go:310] 
	I0729 17:06:37.649323    9524 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bl5nbb.2ai5ar4i6emr4q3n \
	I0729 17:06:37.649382    9524 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0590c93eea840245319f62698163347e7b5c66f98e4c9e27c4a0315b2e5764a4 
	I0729 17:06:37.649581    9524 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 17:06:37.649590    9524 cni.go:84] Creating CNI manager for ""
	I0729 17:06:37.649599    9524 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:06:37.654216    9524 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 17:06:37.662055    9524 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 17:06:37.664951    9524 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 17:06:37.669587    9524 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 17:06:37.669631    9524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:06:37.669655    9524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-449000 minikube.k8s.io/updated_at=2024_07_29T17_06_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500 minikube.k8s.io/name=running-upgrade-449000 minikube.k8s.io/primary=true
	I0729 17:06:37.719081    9524 kubeadm.go:1113] duration metric: took 49.480959ms to wait for elevateKubeSystemPrivileges
	I0729 17:06:37.719107    9524 ops.go:34] apiserver oom_adj: -16
	I0729 17:06:37.719113    9524 kubeadm.go:394] duration metric: took 4m21.250214791s to StartCluster
	I0729 17:06:37.719124    9524 settings.go:142] acquiring lock: {Name:mke03e8e29c1ffe5c4cd19f776f54e7d6bc684a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:06:37.719216    9524 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:06:37.719650    9524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/kubeconfig: {Name:mk580a93ad62a9c0663fd1e6ef1bfe6feb6bde87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:06:37.719832    9524 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:06:37.719989    9524 config.go:182] Loaded profile config "running-upgrade-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 17:06:37.719906    9524 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 17:06:37.720022    9524 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-449000"
	I0729 17:06:37.720031    9524 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-449000"
	I0729 17:06:37.720034    9524 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-449000"
	W0729 17:06:37.720037    9524 addons.go:243] addon storage-provisioner should already be in state true
	I0729 17:06:37.720040    9524 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-449000"
	I0729 17:06:37.720049    9524 host.go:66] Checking if "running-upgrade-449000" exists ...
	I0729 17:06:37.721112    9524 kapi.go:59] client config for running-upgrade-449000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/running-upgrade-449000/client.key", CAFile:"/Users/jenkins/minikube-integration/19346-7076/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10663c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 17:06:37.721236    9524 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-449000"
	W0729 17:06:37.721241    9524 addons.go:243] addon default-storageclass should already be in state true
	I0729 17:06:37.721247    9524 host.go:66] Checking if "running-upgrade-449000" exists ...
	I0729 17:06:37.724156    9524 out.go:177] * Verifying Kubernetes components...
	I0729 17:06:37.724575    9524 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 17:06:37.727374    9524 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 17:06:37.727380    9524 sshutil.go:53] new ssh client: &{IP:localhost Port:51264 SSHKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/running-upgrade-449000/id_rsa Username:docker}
	I0729 17:06:37.728301    9524 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:06:37.732180    9524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:06:37.735227    9524 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:06:37.735234    9524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 17:06:37.735240    9524 sshutil.go:53] new ssh client: &{IP:localhost Port:51264 SSHKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/running-upgrade-449000/id_rsa Username:docker}
	I0729 17:06:37.813793    9524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:06:37.819306    9524 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:06:37.819354    9524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:06:37.823129    9524 api_server.go:72] duration metric: took 103.285917ms to wait for apiserver process to appear ...
	I0729 17:06:37.823136    9524 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:06:37.823143    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:37.857875    9524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 17:06:37.862803    9524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:06:42.825316    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:42.825362    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:47.825785    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:47.825811    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:52.826160    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:52.826179    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:57.826637    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:57.826667    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:02.827706    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:02.827729    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:07.828574    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:07.828621    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 17:07:08.214921    9524 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 17:07:08.220272    9524 out.go:177] * Enabled addons: storage-provisioner
	I0729 17:07:08.229150    9524 addons.go:510] duration metric: took 30.509276041s for enable addons: enabled=[storage-provisioner]
	I0729 17:07:12.829780    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:12.829818    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:17.831212    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:17.831233    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:22.832972    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:22.833012    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:27.835338    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:27.835365    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:32.836556    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:32.836582    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:37.838698    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:37.838851    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:07:37.852007    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:07:37.852070    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:07:37.863848    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:07:37.863912    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:07:37.875591    9524 logs.go:276] 2 containers: [f94cf076b90f 4e7e6bd3ac01]
	I0729 17:07:37.875651    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:07:37.887991    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:07:37.888053    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:07:37.899822    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:07:37.899889    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:07:37.910708    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:07:37.910771    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:07:37.922455    9524 logs.go:276] 0 containers: []
	W0729 17:07:37.922465    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:07:37.922516    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:07:37.933688    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:07:37.933703    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:07:37.933708    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:07:37.958643    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:07:37.958658    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:07:37.970813    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:07:37.970823    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:07:37.985539    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:07:37.985554    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:07:37.997560    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:07:37.997575    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:07:38.009766    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:07:38.009774    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:07:38.024454    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:07:38.024463    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:07:38.036312    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:07:38.036326    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:07:38.047809    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:07:38.047819    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:07:38.071894    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:07:38.071906    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:07:38.109593    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:07:38.109607    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:07:38.114027    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:07:38.114034    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:07:38.152135    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:07:38.152151    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:07:40.667548    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:45.669835    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:45.669941    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:07:45.681179    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:07:45.681266    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:07:45.692717    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:07:45.692799    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:07:45.704656    9524 logs.go:276] 2 containers: [f94cf076b90f 4e7e6bd3ac01]
	I0729 17:07:45.704727    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:07:45.716126    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:07:45.716205    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:07:45.727611    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:07:45.727686    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:07:45.738772    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:07:45.738847    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:07:45.757611    9524 logs.go:276] 0 containers: []
	W0729 17:07:45.757621    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:07:45.757680    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:07:45.768502    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:07:45.768518    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:07:45.768523    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:07:45.786161    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:07:45.786171    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:07:45.798061    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:07:45.798075    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:07:45.815541    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:07:45.815552    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:07:45.827629    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:07:45.827639    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:07:45.851034    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:07:45.851045    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:07:45.862513    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:07:45.862526    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:07:45.898917    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:07:45.898927    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:07:45.903873    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:07:45.903882    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:07:45.942422    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:07:45.942432    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:07:45.957385    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:07:45.957398    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:07:45.971589    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:07:45.971605    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:07:45.994152    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:07:45.994163    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:07:48.508077    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:53.510268    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:53.510354    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:07:53.521781    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:07:53.521854    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:07:53.534007    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:07:53.534082    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:07:53.545489    9524 logs.go:276] 2 containers: [f94cf076b90f 4e7e6bd3ac01]
	I0729 17:07:53.545564    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:07:53.557024    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:07:53.557097    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:07:53.568621    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:07:53.568692    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:07:53.579733    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:07:53.579804    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:07:53.590887    9524 logs.go:276] 0 containers: []
	W0729 17:07:53.590900    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:07:53.590963    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:07:53.602451    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:07:53.602466    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:07:53.602472    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:07:53.614032    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:07:53.614043    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:07:53.629429    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:07:53.629442    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:07:53.641496    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:07:53.641511    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:07:53.666098    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:07:53.666110    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:07:53.703847    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:07:53.703855    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:07:53.743285    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:07:53.743297    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:07:53.758853    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:07:53.758864    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:07:53.770671    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:07:53.770683    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:07:53.788839    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:07:53.788850    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:07:53.800046    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:07:53.800059    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:07:53.815081    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:07:53.815093    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:07:53.820065    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:07:53.820071    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:07:56.337696    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:01.338291    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:01.338378    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:01.349850    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:08:01.349922    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:01.360976    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:08:01.361048    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:01.372333    9524 logs.go:276] 2 containers: [f94cf076b90f 4e7e6bd3ac01]
	I0729 17:08:01.372413    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:01.386051    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:08:01.386120    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:01.397796    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:08:01.397864    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:01.409882    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:08:01.409960    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:01.421188    9524 logs.go:276] 0 containers: []
	W0729 17:08:01.421198    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:01.421255    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:01.432258    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:08:01.432272    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:01.432278    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:01.436921    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:08:01.436928    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:08:01.451720    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:08:01.451732    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:08:01.468242    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:08:01.468255    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:08:01.483439    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:08:01.483453    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:08:01.495335    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:01.495346    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:01.533743    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:01.533760    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:01.570677    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:08:01.570689    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:08:01.591202    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:08:01.591213    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:08:01.611895    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:08:01.611908    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:08:01.623782    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:08:01.623791    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:08:01.641209    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:01.641219    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:01.663762    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:08:01.663770    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:04.176630    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:09.178964    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:09.179048    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:09.190408    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:08:09.190478    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:09.202051    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:08:09.202120    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:09.214673    9524 logs.go:276] 2 containers: [f94cf076b90f 4e7e6bd3ac01]
	I0729 17:08:09.214743    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:09.231143    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:08:09.231215    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:09.242958    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:08:09.243029    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:09.261942    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:08:09.262019    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:09.273166    9524 logs.go:276] 0 containers: []
	W0729 17:08:09.273179    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:09.273242    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:09.284750    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:08:09.284766    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:08:09.284771    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:08:09.302298    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:08:09.302312    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:08:09.314907    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:09.314917    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:09.338510    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:09.338519    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:09.373780    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:09.373787    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:09.378366    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:08:09.378373    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:08:09.396996    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:08:09.397009    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:08:09.409077    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:08:09.409087    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:08:09.426580    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:08:09.426590    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:09.438747    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:09.438757    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:09.475693    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:08:09.475710    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:08:09.489896    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:08:09.489906    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:08:09.502205    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:08:09.502217    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:08:12.018379    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:17.018999    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:17.019095    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:17.030533    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:08:17.030605    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:17.041697    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:08:17.041772    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:17.054212    9524 logs.go:276] 2 containers: [f94cf076b90f 4e7e6bd3ac01]
	I0729 17:08:17.054289    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:17.072620    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:08:17.072697    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:17.084982    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:08:17.085056    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:17.100960    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:08:17.101031    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:17.112877    9524 logs.go:276] 0 containers: []
	W0729 17:08:17.112888    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:17.112947    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:17.124688    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:08:17.124704    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:17.124710    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:17.130033    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:17.130043    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:17.173328    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:08:17.173344    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:08:17.188918    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:08:17.188934    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:08:17.203623    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:08:17.203636    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:08:17.215646    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:08:17.215658    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:08:17.227404    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:08:17.227415    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:17.238917    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:17.238931    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:17.275163    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:08:17.275171    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:08:17.287280    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:08:17.287292    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:08:17.302262    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:08:17.302276    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:08:17.317460    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:08:17.317475    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:08:17.335587    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:17.335600    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:19.860772    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:24.861156    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:24.861266    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:24.873922    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:08:24.874000    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:24.885218    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:08:24.885293    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:24.896313    9524 logs.go:276] 2 containers: [f94cf076b90f 4e7e6bd3ac01]
	I0729 17:08:24.896384    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:24.908151    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:08:24.908219    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:24.919194    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:08:24.919264    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:24.931072    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:08:24.931142    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:24.942959    9524 logs.go:276] 0 containers: []
	W0729 17:08:24.942969    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:24.943026    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:24.955023    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:08:24.955039    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:08:24.955045    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:08:24.967913    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:08:24.967924    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:08:24.985339    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:24.985352    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:25.022924    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:25.022933    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:25.028039    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:25.028055    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:25.066396    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:08:25.066409    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:08:25.084147    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:08:25.084158    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:08:25.103934    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:08:25.103945    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:08:25.115871    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:25.115885    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:25.140272    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:08:25.140280    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:25.151704    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:08:25.151717    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:08:25.166152    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:08:25.166166    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:08:25.188838    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:08:25.188849    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:08:27.708783    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:32.709119    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:32.709203    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:32.721278    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:08:32.721349    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:32.737772    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:08:32.737848    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:32.752315    9524 logs.go:276] 2 containers: [f94cf076b90f 4e7e6bd3ac01]
	I0729 17:08:32.752384    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:32.763080    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:08:32.763150    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:32.775269    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:08:32.775340    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:32.786944    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:08:32.787010    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:32.797968    9524 logs.go:276] 0 containers: []
	W0729 17:08:32.797979    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:32.798034    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:32.809141    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:08:32.809158    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:08:32.809163    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:08:32.821912    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:08:32.821924    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:08:32.834693    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:08:32.834706    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:08:32.850975    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:08:32.850990    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:08:32.864271    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:32.864279    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:32.889021    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:08:32.889033    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:32.901590    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:32.901604    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:32.963767    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:32.963778    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:32.968976    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:08:32.968983    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:08:32.984976    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:08:32.984987    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:08:32.999640    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:08:32.999654    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:08:33.012217    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:08:33.012227    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:08:33.029952    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:33.029962    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:35.570286    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:40.570777    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:40.570892    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:40.583086    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:08:40.583156    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:40.593986    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:08:40.594062    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:40.605357    9524 logs.go:276] 2 containers: [f94cf076b90f 4e7e6bd3ac01]
	I0729 17:08:40.605435    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:40.616806    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:08:40.616879    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:40.628388    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:08:40.628457    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:40.639932    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:08:40.640006    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:40.653164    9524 logs.go:276] 0 containers: []
	W0729 17:08:40.653176    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:40.653240    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:40.663993    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:08:40.664009    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:08:40.664013    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:08:40.676305    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:40.676315    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:40.681251    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:40.681260    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:40.719590    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:08:40.719603    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:08:40.732173    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:08:40.732184    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:08:40.753663    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:08:40.753680    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:08:40.766592    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:40.766604    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:40.791322    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:08:40.791337    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:40.803268    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:40.803279    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:40.839734    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:08:40.839744    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:08:40.855633    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:08:40.855643    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:08:40.869271    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:08:40.869279    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:08:40.880822    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:08:40.880831    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:08:43.407930    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:48.409900    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:48.409972    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:48.422425    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:08:48.422493    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:48.433623    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:08:48.433698    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:48.445850    9524 logs.go:276] 2 containers: [f94cf076b90f 4e7e6bd3ac01]
	I0729 17:08:48.446001    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:48.457213    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:08:48.457275    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:48.468371    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:08:48.468447    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:48.482212    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:08:48.482280    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:48.493562    9524 logs.go:276] 0 containers: []
	W0729 17:08:48.493572    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:48.493632    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:48.505395    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:08:48.505409    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:08:48.505416    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:48.518318    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:08:48.518328    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:08:48.531155    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:08:48.531165    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:08:48.546854    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:08:48.546866    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:08:48.559503    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:08:48.559513    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:08:48.577531    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:08:48.577548    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:08:48.590172    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:08:48.590181    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:08:48.605573    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:48.605585    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:48.631393    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:48.631402    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:48.669688    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:48.669705    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:48.674899    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:48.674905    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:48.710779    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:08:48.710791    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:08:48.725031    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:08:48.725043    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:08:51.241157    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:56.243376    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:56.243464    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:56.255922    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:08:56.255985    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:56.267341    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:08:56.267413    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:56.279458    9524 logs.go:276] 4 containers: [1dbf3e1377bf 1501a4917fcb f94cf076b90f 4e7e6bd3ac01]
	I0729 17:08:56.279538    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:56.290997    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:08:56.291069    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:56.302391    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:08:56.302463    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:56.313816    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:08:56.313891    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:56.325286    9524 logs.go:276] 0 containers: []
	W0729 17:08:56.325299    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:56.325361    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:56.336266    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:08:56.336284    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:08:56.336290    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:08:56.351576    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:08:56.351586    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:08:56.366029    9524 logs.go:123] Gathering logs for coredns [1501a4917fcb] ...
	I0729 17:08:56.366037    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1501a4917fcb"
	I0729 17:08:56.377748    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:08:56.377759    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:08:56.395961    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:56.395971    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:56.422523    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:56.422533    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:56.428698    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:08:56.428708    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:08:56.440760    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:56.440772    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:56.479033    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:08:56.479046    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:08:56.492714    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:08:56.492725    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:08:56.506417    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:08:56.506430    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:56.518726    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:56.518738    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:56.558784    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:08:56.558797    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:08:56.574707    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:08:56.574718    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:08:56.586775    9524 logs.go:123] Gathering logs for coredns [1dbf3e1377bf] ...
	I0729 17:08:56.586784    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbf3e1377bf"
	I0729 17:08:59.105928    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:04.106288    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:04.106365    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:04.118649    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:09:04.118707    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:04.130481    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:09:04.130517    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:04.142607    9524 logs.go:276] 4 containers: [1dbf3e1377bf 1501a4917fcb f94cf076b90f 4e7e6bd3ac01]
	I0729 17:09:04.142671    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:04.154430    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:09:04.154484    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:04.166243    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:09:04.166306    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:04.177701    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:09:04.177770    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:04.189026    9524 logs.go:276] 0 containers: []
	W0729 17:09:04.189039    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:04.189106    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:04.200924    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:09:04.200942    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:09:04.200947    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:09:04.216792    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:09:04.216804    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:09:04.229800    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:09:04.229813    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:09:04.245507    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:09:04.245523    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:09:04.264555    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:09:04.264564    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:09:04.277829    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:09:04.277840    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:04.292174    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:04.292182    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:04.331678    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:04.331688    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:04.337057    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:04.337067    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:04.390604    9524 logs.go:123] Gathering logs for coredns [1501a4917fcb] ...
	I0729 17:09:04.390615    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1501a4917fcb"
	I0729 17:09:04.403526    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:09:04.403542    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:09:04.416528    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:09:04.416543    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:09:04.429108    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:04.429121    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:09:04.454229    9524 logs.go:123] Gathering logs for coredns [1dbf3e1377bf] ...
	I0729 17:09:04.454241    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbf3e1377bf"
	I0729 17:09:04.470094    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:09:04.470105    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:09:06.987316    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:11.987520    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:11.987584    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:12.000219    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:09:12.000271    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:12.011273    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:09:12.011323    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:12.022214    9524 logs.go:276] 4 containers: [1dbf3e1377bf 1501a4917fcb f94cf076b90f 4e7e6bd3ac01]
	I0729 17:09:12.022264    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:12.035140    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:09:12.035198    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:12.046940    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:09:12.047001    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:12.058660    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:09:12.058722    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:12.070145    9524 logs.go:276] 0 containers: []
	W0729 17:09:12.070153    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:12.070228    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:12.083571    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:09:12.083587    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:12.083592    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:12.088508    9524 logs.go:123] Gathering logs for coredns [1dbf3e1377bf] ...
	I0729 17:09:12.088522    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbf3e1377bf"
	I0729 17:09:12.101278    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:09:12.101289    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:09:12.113716    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:09:12.113728    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:09:12.135974    9524 logs.go:123] Gathering logs for coredns [1501a4917fcb] ...
	I0729 17:09:12.135985    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1501a4917fcb"
	I0729 17:09:12.148022    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:09:12.148033    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:09:12.163952    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:09:12.163964    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:09:12.180756    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:09:12.180766    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:09:12.199301    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:12.199311    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:09:12.225843    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:12.225856    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:12.265261    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:09:12.265269    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:09:12.279828    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:09:12.279841    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:09:12.298851    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:12.298863    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:12.336417    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:09:12.336428    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:09:12.347984    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:09:12.347995    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:14.860511    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:19.862729    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:19.862888    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:19.874415    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:09:19.874485    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:19.885674    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:09:19.885741    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:19.900613    9524 logs.go:276] 4 containers: [1dbf3e1377bf 1501a4917fcb f94cf076b90f 4e7e6bd3ac01]
	I0729 17:09:19.900683    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:19.911806    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:09:19.911875    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:19.923948    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:09:19.924015    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:19.935911    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:09:19.935982    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:19.946629    9524 logs.go:276] 0 containers: []
	W0729 17:09:19.946641    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:19.946700    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:19.958150    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:09:19.958166    9524 logs.go:123] Gathering logs for coredns [1501a4917fcb] ...
	I0729 17:09:19.958171    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1501a4917fcb"
	I0729 17:09:19.970913    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:09:19.970924    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:09:19.987204    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:09:19.987214    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:09:19.999696    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:09:19.999709    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:20.012607    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:20.012621    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:20.018155    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:09:20.018166    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:09:20.032929    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:20.032942    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:20.073070    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:09:20.073081    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:09:20.089417    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:09:20.089428    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:09:20.102677    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:20.102688    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:09:20.128641    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:09:20.128654    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:09:20.146919    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:20.146931    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:20.186443    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:09:20.186460    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:09:20.201267    9524 logs.go:123] Gathering logs for coredns [1dbf3e1377bf] ...
	I0729 17:09:20.201277    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbf3e1377bf"
	I0729 17:09:20.213264    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:09:20.213275    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:09:22.728740    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:27.730003    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:27.730159    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:27.746858    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:09:27.746914    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:27.758677    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:09:27.758733    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:27.773640    9524 logs.go:276] 4 containers: [1dbf3e1377bf 1501a4917fcb f94cf076b90f 4e7e6bd3ac01]
	I0729 17:09:27.773698    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:27.785426    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:09:27.785480    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:27.797629    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:09:27.797702    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:27.809848    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:09:27.809916    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:27.821163    9524 logs.go:276] 0 containers: []
	W0729 17:09:27.821174    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:27.821237    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:27.833240    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:09:27.833258    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:09:27.833263    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:09:27.845720    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:09:27.845728    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:27.858355    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:09:27.858368    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:09:27.881638    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:09:27.881653    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:09:27.894106    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:09:27.894119    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:09:27.909468    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:09:27.909476    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:09:27.926253    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:09:27.926266    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:09:27.938807    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:27.938818    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:27.976621    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:27.976637    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:27.981858    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:09:27.981873    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:09:27.997546    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:09:27.997558    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:09:28.013347    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:28.013359    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:09:28.039889    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:28.039905    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:28.079431    9524 logs.go:123] Gathering logs for coredns [1dbf3e1377bf] ...
	I0729 17:09:28.079444    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbf3e1377bf"
	I0729 17:09:28.093858    9524 logs.go:123] Gathering logs for coredns [1501a4917fcb] ...
	I0729 17:09:28.093869    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1501a4917fcb"
	I0729 17:09:30.608090    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:35.610325    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:35.610404    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:35.621885    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:09:35.621952    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:35.633691    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:09:35.633756    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:35.646127    9524 logs.go:276] 4 containers: [1dbf3e1377bf 1501a4917fcb f94cf076b90f 4e7e6bd3ac01]
	I0729 17:09:35.646198    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:35.657789    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:09:35.657889    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:35.669392    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:09:35.669454    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:35.681250    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:09:35.681311    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:35.692871    9524 logs.go:276] 0 containers: []
	W0729 17:09:35.692880    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:35.692933    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:35.704699    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:09:35.704716    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:09:35.704723    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:09:35.720345    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:09:35.720359    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:09:35.735123    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:35.735133    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:35.740122    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:09:35.740130    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:09:35.755573    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:09:35.755582    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:09:35.779941    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:09:35.779951    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:35.792747    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:35.792755    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:35.833356    9524 logs.go:123] Gathering logs for coredns [1501a4917fcb] ...
	I0729 17:09:35.833367    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1501a4917fcb"
	I0729 17:09:35.846071    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:09:35.846083    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:09:35.863842    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:35.863853    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:09:35.888774    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:09:35.888783    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:09:35.906244    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:35.906257    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:35.944079    9524 logs.go:123] Gathering logs for coredns [1dbf3e1377bf] ...
	I0729 17:09:35.944097    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbf3e1377bf"
	I0729 17:09:35.956587    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:09:35.956598    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:09:35.968847    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:09:35.968859    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:09:38.482246    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:43.484613    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:43.484737    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:43.501774    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:09:43.501854    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:43.515532    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:09:43.515607    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:43.528888    9524 logs.go:276] 4 containers: [1dbf3e1377bf 1501a4917fcb f94cf076b90f 4e7e6bd3ac01]
	I0729 17:09:43.528963    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:43.540287    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:09:43.540356    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:43.551883    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:09:43.551951    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:43.568356    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:09:43.568422    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:43.579294    9524 logs.go:276] 0 containers: []
	W0729 17:09:43.579307    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:43.579369    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:43.590774    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:09:43.590794    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:43.590799    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:43.595991    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:43.596003    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:43.633766    9524 logs.go:123] Gathering logs for coredns [1501a4917fcb] ...
	I0729 17:09:43.633778    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1501a4917fcb"
	I0729 17:09:43.646294    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:43.646306    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:09:43.672595    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:09:43.672620    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:09:43.688176    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:09:43.688188    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:09:43.701111    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:09:43.701122    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:43.715069    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:09:43.715084    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:09:43.733729    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:09:43.733744    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:09:43.747191    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:09:43.747204    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:09:43.771559    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:43.771574    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:43.808631    9524 logs.go:123] Gathering logs for coredns [1dbf3e1377bf] ...
	I0729 17:09:43.808641    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbf3e1377bf"
	I0729 17:09:43.821381    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:09:43.821388    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:09:43.837404    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:09:43.837414    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:09:43.849666    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:09:43.849677    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:09:46.363592    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:51.365744    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:51.365912    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:51.383289    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:09:51.383388    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:51.401861    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:09:51.401946    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:51.416535    9524 logs.go:276] 4 containers: [1dbf3e1377bf 1501a4917fcb f94cf076b90f 4e7e6bd3ac01]
	I0729 17:09:51.416615    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:51.440382    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:09:51.440457    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:51.454562    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:09:51.454633    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:51.468954    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:09:51.469008    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:51.487604    9524 logs.go:276] 0 containers: []
	W0729 17:09:51.487615    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:51.487662    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:51.499386    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:09:51.499403    9524 logs.go:123] Gathering logs for coredns [1501a4917fcb] ...
	I0729 17:09:51.499408    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1501a4917fcb"
	I0729 17:09:51.514485    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:09:51.514495    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:09:51.534417    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:51.534430    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:09:51.561296    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:09:51.561325    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:51.583176    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:09:51.583197    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:09:51.632849    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:51.632877    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:51.684619    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:51.684649    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:51.694700    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:09:51.694712    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:09:51.712107    9524 logs.go:123] Gathering logs for coredns [1dbf3e1377bf] ...
	I0729 17:09:51.712120    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbf3e1377bf"
	I0729 17:09:51.727854    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:09:51.727865    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:09:51.741078    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:51.741089    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:51.781363    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:09:51.781372    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:09:51.799822    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:09:51.799835    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:09:51.813509    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:09:51.813521    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:09:51.832211    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:09:51.832228    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:09:54.346904    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:59.347636    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:59.347788    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:59.359455    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:09:59.359530    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:59.369855    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:09:59.369933    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:59.380397    9524 logs.go:276] 4 containers: [1dbf3e1377bf 1501a4917fcb f94cf076b90f 4e7e6bd3ac01]
	I0729 17:09:59.380464    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:59.391029    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:09:59.391090    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:59.401421    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:09:59.401486    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:59.411927    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:09:59.411987    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:59.422094    9524 logs.go:276] 0 containers: []
	W0729 17:09:59.422109    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:59.422164    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:59.432227    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:09:59.432243    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:09:59.432248    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:09:59.446312    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:09:59.446326    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:09:59.457553    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:59.457564    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:59.495741    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:59.495750    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:59.531124    9524 logs.go:123] Gathering logs for coredns [1dbf3e1377bf] ...
	I0729 17:09:59.531136    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbf3e1377bf"
	I0729 17:09:59.543878    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:09:59.543889    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:09:59.555535    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:09:59.555549    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:09:59.568731    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:09:59.568745    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:09:59.586690    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:09:59.586703    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:09:59.598189    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:09:59.598202    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:59.610069    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:59.610079    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:59.615565    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:09:59.615574    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:09:59.630627    9524 logs.go:123] Gathering logs for coredns [1501a4917fcb] ...
	I0729 17:09:59.630641    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1501a4917fcb"
	I0729 17:09:59.642645    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:09:59.642657    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:09:59.657803    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:59.657813    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:10:02.185146    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:10:07.187470    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:10:07.187718    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:10:07.204993    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:10:07.205089    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:10:07.220153    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:10:07.220239    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:10:07.232611    9524 logs.go:276] 4 containers: [1dbf3e1377bf 1501a4917fcb f94cf076b90f 4e7e6bd3ac01]
	I0729 17:10:07.232696    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:10:07.243005    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:10:07.243083    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:10:07.254143    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:10:07.254211    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:10:07.265249    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:10:07.265316    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:10:07.275781    9524 logs.go:276] 0 containers: []
	W0729 17:10:07.275792    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:10:07.275846    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:10:07.286315    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:10:07.286334    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:10:07.286339    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:10:07.324005    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:10:07.324016    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:10:07.338478    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:10:07.338489    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:10:07.352852    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:10:07.352865    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:10:07.364457    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:10:07.364469    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:10:07.379783    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:10:07.379797    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:10:07.391720    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:10:07.391735    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:10:07.409219    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:10:07.409231    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:10:07.414310    9524 logs.go:123] Gathering logs for coredns [1dbf3e1377bf] ...
	I0729 17:10:07.414317    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbf3e1377bf"
	I0729 17:10:07.426249    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:10:07.426262    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:10:07.438428    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:10:07.438438    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:10:07.477557    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:10:07.477567    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:10:07.489823    9524 logs.go:123] Gathering logs for coredns [1501a4917fcb] ...
	I0729 17:10:07.489835    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1501a4917fcb"
	I0729 17:10:07.501256    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:10:07.501265    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:10:07.525125    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:10:07.525138    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:10:10.039089    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:10:15.041418    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:10:15.041577    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:10:15.055882    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:10:15.055960    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:10:15.067083    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:10:15.067153    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:10:15.082242    9524 logs.go:276] 4 containers: [1dbf3e1377bf 1501a4917fcb f94cf076b90f 4e7e6bd3ac01]
	I0729 17:10:15.082322    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:10:15.092774    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:10:15.092846    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:10:15.103750    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:10:15.103824    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:10:15.114683    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:10:15.114752    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:10:15.124580    9524 logs.go:276] 0 containers: []
	W0729 17:10:15.124595    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:10:15.124654    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:10:15.135047    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:10:15.135064    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:10:15.135069    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:10:15.150437    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:10:15.150450    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:10:15.162881    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:10:15.162892    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:10:15.175841    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:10:15.175850    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:10:15.180281    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:10:15.180288    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:10:15.215699    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:10:15.215708    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:10:15.230120    9524 logs.go:123] Gathering logs for coredns [1dbf3e1377bf] ...
	I0729 17:10:15.230132    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbf3e1377bf"
	I0729 17:10:15.241380    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:10:15.241391    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:10:15.256114    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:10:15.256126    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:10:15.271713    9524 logs.go:123] Gathering logs for coredns [1501a4917fcb] ...
	I0729 17:10:15.271725    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1501a4917fcb"
	I0729 17:10:15.284850    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:10:15.284859    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:10:15.303316    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:10:15.303331    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:10:15.339714    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:10:15.339722    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:10:15.353521    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:10:15.353534    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:10:15.365509    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:10:15.365523    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:10:17.892238    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:10:22.894487    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:10:22.894734    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:10:22.920732    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:10:22.920846    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:10:22.937828    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:10:22.937906    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:10:22.951631    9524 logs.go:276] 4 containers: [1dbf3e1377bf 1501a4917fcb f94cf076b90f 4e7e6bd3ac01]
	I0729 17:10:22.951704    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:10:22.963513    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:10:22.963578    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:10:22.974896    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:10:22.974965    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:10:22.985922    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:10:22.985990    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:10:22.996416    9524 logs.go:276] 0 containers: []
	W0729 17:10:22.996429    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:10:22.996488    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:10:23.007080    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:10:23.007097    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:10:23.007103    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:10:23.044009    9524 logs.go:123] Gathering logs for coredns [1501a4917fcb] ...
	I0729 17:10:23.044018    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1501a4917fcb"
	I0729 17:10:23.055729    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:10:23.055740    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:10:23.070383    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:10:23.070397    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:10:23.089779    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:10:23.089792    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:10:23.101716    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:10:23.101727    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:10:23.116346    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:10:23.116358    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:10:23.130766    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:10:23.130779    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:10:23.142736    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:10:23.142749    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:10:23.154659    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:10:23.154671    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:10:23.168891    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:10:23.168902    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:10:23.173916    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:10:23.173925    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:10:23.214732    9524 logs.go:123] Gathering logs for coredns [1dbf3e1377bf] ...
	I0729 17:10:23.214743    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbf3e1377bf"
	I0729 17:10:23.226621    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:10:23.226631    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:10:23.238800    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:10:23.238810    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:10:25.764991    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:10:30.767452    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:10:30.767934    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:10:30.809067    9524 logs.go:276] 1 containers: [80dd326861ff]
	I0729 17:10:30.809202    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:10:30.829412    9524 logs.go:276] 1 containers: [eed3acf2d926]
	I0729 17:10:30.829510    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:10:30.843917    9524 logs.go:276] 4 containers: [1dbf3e1377bf 1501a4917fcb f94cf076b90f 4e7e6bd3ac01]
	I0729 17:10:30.843998    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:10:30.857960    9524 logs.go:276] 1 containers: [ab12392f1d97]
	I0729 17:10:30.858024    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:10:30.868915    9524 logs.go:276] 1 containers: [b268d01093b4]
	I0729 17:10:30.868981    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:10:30.879887    9524 logs.go:276] 1 containers: [e9de93f280c4]
	I0729 17:10:30.879951    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:10:30.889765    9524 logs.go:276] 0 containers: []
	W0729 17:10:30.889776    9524 logs.go:278] No container was found matching "kindnet"
	I0729 17:10:30.889825    9524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:10:30.900705    9524 logs.go:276] 1 containers: [34ac75df0db5]
	I0729 17:10:30.900723    9524 logs.go:123] Gathering logs for kubelet ...
	I0729 17:10:30.900728    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:10:30.936679    9524 logs.go:123] Gathering logs for dmesg ...
	I0729 17:10:30.936688    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:10:30.941265    9524 logs.go:123] Gathering logs for coredns [1dbf3e1377bf] ...
	I0729 17:10:30.941274    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbf3e1377bf"
	I0729 17:10:30.953665    9524 logs.go:123] Gathering logs for kube-proxy [b268d01093b4] ...
	I0729 17:10:30.953678    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b268d01093b4"
	I0729 17:10:30.966135    9524 logs.go:123] Gathering logs for kube-controller-manager [e9de93f280c4] ...
	I0729 17:10:30.966149    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9de93f280c4"
	I0729 17:10:30.984270    9524 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:10:30.984283    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:10:31.018957    9524 logs.go:123] Gathering logs for coredns [f94cf076b90f] ...
	I0729 17:10:31.018970    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94cf076b90f"
	I0729 17:10:31.031685    9524 logs.go:123] Gathering logs for storage-provisioner [34ac75df0db5] ...
	I0729 17:10:31.031696    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ac75df0db5"
	I0729 17:10:31.043582    9524 logs.go:123] Gathering logs for Docker ...
	I0729 17:10:31.043591    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:10:31.067409    9524 logs.go:123] Gathering logs for kube-apiserver [80dd326861ff] ...
	I0729 17:10:31.067419    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80dd326861ff"
	I0729 17:10:31.084288    9524 logs.go:123] Gathering logs for coredns [1501a4917fcb] ...
	I0729 17:10:31.084298    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1501a4917fcb"
	I0729 17:10:31.100218    9524 logs.go:123] Gathering logs for coredns [4e7e6bd3ac01] ...
	I0729 17:10:31.100229    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e6bd3ac01"
	I0729 17:10:31.113608    9524 logs.go:123] Gathering logs for container status ...
	I0729 17:10:31.113621    9524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:10:31.125781    9524 logs.go:123] Gathering logs for etcd [eed3acf2d926] ...
	I0729 17:10:31.125791    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed3acf2d926"
	I0729 17:10:31.143107    9524 logs.go:123] Gathering logs for kube-scheduler [ab12392f1d97] ...
	I0729 17:10:31.143117    9524 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab12392f1d97"
	I0729 17:10:33.660241    9524 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:10:38.663097    9524 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:10:38.669335    9524 out.go:177] 
	W0729 17:10:38.673467    9524 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 17:10:38.673515    9524 out.go:239] * 
	W0729 17:10:38.676202    9524 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:10:38.691278    9524 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-449000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-29 17:10:38.821704 -0700 PDT m=+1396.128424168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-449000 -n running-upgrade-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-449000 -n running-upgrade-449000: exit status 2 (15.638814542s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-449000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-561000 sudo cat              | cilium-561000             | jenkins | v1.33.1 | 29 Jul 24 16:59 PDT |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-561000 sudo cat              | cilium-561000             | jenkins | v1.33.1 | 29 Jul 24 16:59 PDT |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-561000 sudo                  | cilium-561000             | jenkins | v1.33.1 | 29 Jul 24 16:59 PDT |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-561000 sudo                  | cilium-561000             | jenkins | v1.33.1 | 29 Jul 24 16:59 PDT |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-561000 sudo                  | cilium-561000             | jenkins | v1.33.1 | 29 Jul 24 16:59 PDT |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-561000 sudo find             | cilium-561000             | jenkins | v1.33.1 | 29 Jul 24 16:59 PDT |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-561000 sudo crio             | cilium-561000             | jenkins | v1.33.1 | 29 Jul 24 16:59 PDT |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-561000                       | cilium-561000             | jenkins | v1.33.1 | 29 Jul 24 16:59 PDT | 29 Jul 24 16:59 PDT |
	| start   | -p kubernetes-upgrade-457000           | kubernetes-upgrade-457000 | jenkins | v1.33.1 | 29 Jul 24 16:59 PDT |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                         |                           |         |         |                     |                     |
	| delete  | -p offline-docker-532000               | offline-docker-532000     | jenkins | v1.33.1 | 29 Jul 24 17:00 PDT | 29 Jul 24 17:00 PDT |
	| stop    | -p kubernetes-upgrade-457000           | kubernetes-upgrade-457000 | jenkins | v1.33.1 | 29 Jul 24 17:00 PDT | 29 Jul 24 17:00 PDT |
	| start   | -p stopped-upgrade-208000              | minikube                  | jenkins | v1.26.0 | 29 Jul 24 17:00 PDT | 29 Jul 24 17:00 PDT |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-457000           | kubernetes-upgrade-457000 | jenkins | v1.33.1 | 29 Jul 24 17:00 PDT |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0    |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                         |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-457000           | kubernetes-upgrade-457000 | jenkins | v1.33.1 | 29 Jul 24 17:00 PDT | 29 Jul 24 17:00 PDT |
	| start   | -p running-upgrade-449000              | minikube                  | jenkins | v1.26.0 | 29 Jul 24 17:00 PDT | 29 Jul 24 17:01 PDT |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                      |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-208000 stop            | minikube                  | jenkins | v1.26.0 | 29 Jul 24 17:00 PDT | 29 Jul 24 17:01 PDT |
	| start   | -p stopped-upgrade-208000              | stopped-upgrade-208000    | jenkins | v1.33.1 | 29 Jul 24 17:01 PDT |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                         |                           |         |         |                     |                     |
	| start   | -p running-upgrade-449000              | running-upgrade-449000    | jenkins | v1.33.1 | 29 Jul 24 17:01 PDT |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                         |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-208000              | stopped-upgrade-208000    | jenkins | v1.33.1 | 29 Jul 24 17:10 PDT | 29 Jul 24 17:10 PDT |
	| start   | -p pause-246000 --memory=2048          | pause-246000              | jenkins | v1.33.1 | 29 Jul 24 17:10 PDT |                     |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=qemu2              |                           |         |         |                     |                     |
	| delete  | -p pause-246000                        | pause-246000              | jenkins | v1.33.1 | 29 Jul 24 17:10 PDT | 29 Jul 24 17:10 PDT |
	| start   | -p NoKubernetes-757000                 | NoKubernetes-757000       | jenkins | v1.33.1 | 29 Jul 24 17:10 PDT |                     |
	|         | --no-kubernetes                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20              |                           |         |         |                     |                     |
	|         | --driver=qemu2                         |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-757000                 | NoKubernetes-757000       | jenkins | v1.33.1 | 29 Jul 24 17:10 PDT |                     |
	|         | --driver=qemu2                         |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-757000                 | NoKubernetes-757000       | jenkins | v1.33.1 | 29 Jul 24 17:10 PDT |                     |
	|         | --no-kubernetes --driver=qemu2         |                           |         |         |                     |                     |
	|         |                                        |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-757000                 | NoKubernetes-757000       | jenkins | v1.33.1 | 29 Jul 24 17:10 PDT |                     |
	|         | --no-kubernetes --driver=qemu2         |                           |         |         |                     |                     |
	|         |                                        |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:10:50
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:10:50.071117    9847 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:10:50.071240    9847 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:10:50.071242    9847 out.go:304] Setting ErrFile to fd 2...
	I0729 17:10:50.071244    9847 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:10:50.071384    9847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:10:50.072408    9847 out.go:298] Setting JSON to false
	I0729 17:10:50.089514    9847 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6017,"bootTime":1722292233,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:10:50.089596    9847 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:10:50.094664    9847 out.go:177] * [NoKubernetes-757000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:10:50.100741    9847 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:10:50.100784    9847 notify.go:220] Checking for updates...
	I0729 17:10:50.107666    9847 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:10:50.109089    9847 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:10:50.111673    9847 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:10:50.114696    9847 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:10:50.117736    9847 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:10:50.120999    9847 config.go:182] Loaded profile config "NoKubernetes-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0729 17:10:50.121200    9847 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0729 17:10:50.121231    9847 start.go:1783] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0729 17:10:50.121237    9847 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:10:50.125621    9847 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 17:10:50.132665    9847 start.go:297] selected driver: qemu2
	I0729 17:10:50.132669    9847 start.go:901] validating driver "qemu2" against &{Name:NoKubernetes-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v0.0.0 ClusterName:NoKubernetes-757000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:10:50.132719    9847 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:10:50.132743    9847 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0729 17:10:50.134910    9847 cni.go:84] Creating CNI manager for ""
	I0729 17:10:50.134923    9847 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 17:10:50.134941    9847 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0729 17:10:50.134968    9847 start.go:340] cluster config:
	{Name:NoKubernetes-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-757000 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:10:50.138231    9847 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:10:50.146601    9847 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-757000
	I0729 17:10:50.150603    9847 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime docker
	W0729 17:10:50.209471    9847 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-arm64.tar.lz4 status code: 404
	I0729 17:10:50.209565    9847 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/NoKubernetes-757000/config.json ...
	I0729 17:10:50.210152    9847 start.go:360] acquireMachinesLock for NoKubernetes-757000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:10:50.210193    9847 start.go:364] duration metric: took 34.833µs to acquireMachinesLock for "NoKubernetes-757000"
	I0729 17:10:50.210201    9847 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:10:50.210206    9847 fix.go:54] fixHost starting: 
	I0729 17:10:50.210338    9847 fix.go:112] recreateIfNeeded on NoKubernetes-757000: state=Stopped err=<nil>
	W0729 17:10:50.210345    9847 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:10:50.219032    9847 out.go:177] * Restarting existing qemu2 VM for "NoKubernetes-757000" ...
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-07-30 00:00:52 UTC, ends at Tue 2024-07-30 00:10:54 UTC. --
	Jul 30 00:10:39 running-upgrade-449000 dockerd[5077]: time="2024-07-30T00:10:39.520407987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 30 00:10:39 running-upgrade-449000 dockerd[5077]: time="2024-07-30T00:10:39.520452487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 30 00:10:39 running-upgrade-449000 dockerd[5077]: time="2024-07-30T00:10:39.520475904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 30 00:10:39 running-upgrade-449000 dockerd[5077]: time="2024-07-30T00:10:39.520575363Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3c73c9b2688a04243e251906d71c6035390fc92bae81a80bb0242d9cd6c1b2d2 pid=21601 runtime=io.containerd.runc.v2
	Jul 30 00:10:39 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:39Z" level=error msg="ContainerStats resp: {0x4000416900 linux}"
	Jul 30 00:10:40 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:40Z" level=error msg="ContainerStats resp: {0x400050d3c0 linux}"
	Jul 30 00:10:40 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:40Z" level=error msg="ContainerStats resp: {0x400050d500 linux}"
	Jul 30 00:10:40 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:40Z" level=error msg="ContainerStats resp: {0x4000578980 linux}"
	Jul 30 00:10:40 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:40Z" level=error msg="ContainerStats resp: {0x400050dd40 linux}"
	Jul 30 00:10:40 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:40Z" level=error msg="ContainerStats resp: {0x400050de80 linux}"
	Jul 30 00:10:41 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:41Z" level=error msg="ContainerStats resp: {0x4000579fc0 linux}"
	Jul 30 00:10:41 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:41Z" level=error msg="ContainerStats resp: {0x4000430840 linux}"
	Jul 30 00:10:41 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:41Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 30 00:10:46 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:46Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 30 00:10:51 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:51Z" level=error msg="ContainerStats resp: {0x4000430fc0 linux}"
	Jul 30 00:10:51 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:51Z" level=error msg="ContainerStats resp: {0x4000431440 linux}"
	Jul 30 00:10:51 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 30 00:10:52 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:52Z" level=error msg="ContainerStats resp: {0x400050d780 linux}"
	Jul 30 00:10:53 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:53Z" level=error msg="ContainerStats resp: {0x400050ce00 linux}"
	Jul 30 00:10:53 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:53Z" level=error msg="ContainerStats resp: {0x400050d240 linux}"
	Jul 30 00:10:53 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:53Z" level=error msg="ContainerStats resp: {0x400050d800 linux}"
	Jul 30 00:10:53 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:53Z" level=error msg="ContainerStats resp: {0x400050dfc0 linux}"
	Jul 30 00:10:53 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:53Z" level=error msg="ContainerStats resp: {0x4000578380 linux}"
	Jul 30 00:10:53 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:53Z" level=error msg="ContainerStats resp: {0x4000a29f40 linux}"
	Jul 30 00:10:53 running-upgrade-449000 cri-dockerd[3741]: time="2024-07-30T00:10:53Z" level=error msg="ContainerStats resp: {0x400077c400 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	3c73c9b2688a0       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   8b1ec79937163
	cb22841fac70a       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   0181b520d1859
	1dbf3e1377bfe       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   0181b520d1859
	1501a4917fcb9       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   8b1ec79937163
	34ac75df0db54       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   5307b0d655e4f
	b268d01093b40       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   53c40f491d205
	ab12392f1d97f       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   c5023413ed66b
	e9de93f280c49       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   b0dbff5c8033f
	eed3acf2d9265       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   9d1e056e807a1
	80dd326861ffc       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   a78fa210b0675
	
	
	==> coredns [1501a4917fcb] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5891036710677911280.4360861171315078644. HINFO: read udp 10.244.0.2:51879->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5891036710677911280.4360861171315078644. HINFO: read udp 10.244.0.2:59912->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5891036710677911280.4360861171315078644. HINFO: read udp 10.244.0.2:43861->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5891036710677911280.4360861171315078644. HINFO: read udp 10.244.0.2:46152->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5891036710677911280.4360861171315078644. HINFO: read udp 10.244.0.2:36399->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5891036710677911280.4360861171315078644. HINFO: read udp 10.244.0.2:56243->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5891036710677911280.4360861171315078644. HINFO: read udp 10.244.0.2:39317->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5891036710677911280.4360861171315078644. HINFO: read udp 10.244.0.2:40171->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5891036710677911280.4360861171315078644. HINFO: read udp 10.244.0.2:40063->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [1dbf3e1377bf] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2331308772964892804.2543609101919967951. HINFO: read udp 10.244.0.3:54703->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2331308772964892804.2543609101919967951. HINFO: read udp 10.244.0.3:56644->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2331308772964892804.2543609101919967951. HINFO: read udp 10.244.0.3:37336->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2331308772964892804.2543609101919967951. HINFO: read udp 10.244.0.3:37455->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2331308772964892804.2543609101919967951. HINFO: read udp 10.244.0.3:55778->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2331308772964892804.2543609101919967951. HINFO: read udp 10.244.0.3:51493->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2331308772964892804.2543609101919967951. HINFO: read udp 10.244.0.3:44895->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2331308772964892804.2543609101919967951. HINFO: read udp 10.244.0.3:53693->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2331308772964892804.2543609101919967951. HINFO: read udp 10.244.0.3:46161->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2331308772964892804.2543609101919967951. HINFO: read udp 10.244.0.3:56086->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3c73c9b2688a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9096870362252653615.7073003310951560642. HINFO: read udp 10.244.0.2:53622->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9096870362252653615.7073003310951560642. HINFO: read udp 10.244.0.2:51910->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9096870362252653615.7073003310951560642. HINFO: read udp 10.244.0.2:55752->10.0.2.3:53: i/o timeout
	
	
	==> coredns [cb22841fac70] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7278389029544427570.1833569706130075039. HINFO: read udp 10.244.0.3:53975->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7278389029544427570.1833569706130075039. HINFO: read udp 10.244.0.3:51162->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7278389029544427570.1833569706130075039. HINFO: read udp 10.244.0.3:50051->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-449000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-449000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=running-upgrade-449000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T17_06_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:06:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-449000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:10:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:06:37 +0000   Tue, 30 Jul 2024 00:06:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:06:37 +0000   Tue, 30 Jul 2024 00:06:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:06:37 +0000   Tue, 30 Jul 2024 00:06:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:06:37 +0000   Tue, 30 Jul 2024 00:06:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-449000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 7980c0ad2c4f4f81925f8e9dfdd4b112
	  System UUID:                7980c0ad2c4f4f81925f8e9dfdd4b112
	  Boot ID:                    7323b5fe-17e3-4f2f-8970-9a9f49a3f9e2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-8gbzq                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-kc4f2                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-449000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-449000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-449000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-q5zg2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-running-upgrade-449000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x5 over 4m22s)  kubelet          Node running-upgrade-449000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-449000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x3 over 4m22s)  kubelet          Node running-upgrade-449000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-449000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-449000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-449000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-449000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-449000 event: Registered Node running-upgrade-449000 in Controller
	
	
	==> dmesg <==
	[  +0.082504] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.215623] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.080026] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.464066] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +9.118144] systemd-fstab-generator[1924]: Ignoring "noauto" for root device
	[ +14.033787] kauditd_printk_skb: 86 callbacks suppressed
	[  +0.984136] systemd-fstab-generator[2548]: Ignoring "noauto" for root device
	[  +0.161987] systemd-fstab-generator[2581]: Ignoring "noauto" for root device
	[  +0.101223] systemd-fstab-generator[2594]: Ignoring "noauto" for root device
	[  +0.110144] systemd-fstab-generator[2646]: Ignoring "noauto" for root device
	[  +5.242568] kauditd_printk_skb: 16 callbacks suppressed
	[ +11.491336] systemd-fstab-generator[3698]: Ignoring "noauto" for root device
	[  +0.082781] systemd-fstab-generator[3709]: Ignoring "noauto" for root device
	[  +0.081510] systemd-fstab-generator[3720]: Ignoring "noauto" for root device
	[  +0.101934] systemd-fstab-generator[3734]: Ignoring "noauto" for root device
	[  +2.171564] overlayfs: '/var/lib/docker/overlay2/l/RXS2GBUI4RFFHRPJ6ORFBUXJHB' not a directory
	[  +0.029941] overlayfs: '/var/lib/docker/overlay2/l/VIJJY7O6JAYQALROQBTOSZAO5B' not a directory
	[  +0.302228] systemd-fstab-generator[4391]: Ignoring "noauto" for root device
	[Jul30 00:02] kauditd_printk_skb: 52 callbacks suppressed
	[ +13.032391] systemd-fstab-generator[6217]: Ignoring "noauto" for root device
	[  +9.361176] kauditd_printk_skb: 36 callbacks suppressed
	[  +0.795553] systemd-fstab-generator[6870]: Ignoring "noauto" for root device
	[Jul30 00:06] systemd-fstab-generator[14659]: Ignoring "noauto" for root device
	[  +5.686645] systemd-fstab-generator[15245]: Ignoring "noauto" for root device
	[  +0.443014] systemd-fstab-generator[15379]: Ignoring "noauto" for root device
	
	
	==> etcd [eed3acf2d926] <==
	{"level":"info","ts":"2024-07-30T00:06:32.910Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-30T00:06:32.912Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-30T00:06:32.919Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-30T00:06:32.920Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-30T00:06:32.920Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-30T00:06:32.920Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-30T00:06:32.920Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-30T00:06:33.607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-30T00:06:33.607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-30T00:06:33.607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-30T00:06:33.607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-30T00:06:33.607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-30T00:06:33.607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-30T00:06:33.607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-30T00:06:33.607Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-449000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-30T00:06:33.607Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T00:06:33.607Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T00:06:33.608Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-30T00:06:33.608Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T00:06:33.608Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-30T00:06:33.608Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-30T00:06:33.608Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-30T00:06:33.614Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T00:06:33.614Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T00:06:33.614Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 00:10:54 up 10 min,  0 users,  load average: 0.13, 0.27, 0.16
	Linux running-upgrade-449000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [80dd326861ff] <==
	I0730 00:06:34.874196       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0730 00:06:34.874204       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0730 00:06:34.875420       1 cache.go:39] Caches are synced for autoregister controller
	I0730 00:06:34.875555       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0730 00:06:34.875589       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0730 00:06:34.878883       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0730 00:06:34.900160       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0730 00:06:35.611837       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0730 00:06:35.778002       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0730 00:06:35.780378       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0730 00:06:35.780397       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0730 00:06:35.904641       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0730 00:06:35.915146       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0730 00:06:35.937389       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0730 00:06:35.939412       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0730 00:06:35.939788       1 controller.go:611] quota admission added evaluator for: endpoints
	I0730 00:06:35.941013       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0730 00:06:36.908756       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0730 00:06:37.457169       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0730 00:06:37.460359       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0730 00:06:37.464536       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0730 00:06:37.507005       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0730 00:06:51.091302       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0730 00:06:51.189478       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0730 00:06:51.557088       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [e9de93f280c4] <==
	I0730 00:06:50.233488       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0730 00:06:50.236682       1 shared_informer.go:262] Caches are synced for taint
	I0730 00:06:50.236728       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0730 00:06:50.236745       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-449000. Assuming now as a timestamp.
	I0730 00:06:50.236780       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0730 00:06:50.236826       1 shared_informer.go:262] Caches are synced for deployment
	I0730 00:06:50.236892       1 event.go:294] "Event occurred" object="running-upgrade-449000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-449000 event: Registered Node running-upgrade-449000 in Controller"
	I0730 00:06:50.236925       1 shared_informer.go:262] Caches are synced for PV protection
	I0730 00:06:50.236936       1 shared_informer.go:262] Caches are synced for cronjob
	I0730 00:06:50.236967       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0730 00:06:50.237322       1 shared_informer.go:262] Caches are synced for attach detach
	I0730 00:06:50.237604       1 shared_informer.go:262] Caches are synced for crt configmap
	I0730 00:06:50.336954       1 shared_informer.go:262] Caches are synced for stateful set
	I0730 00:06:50.422584       1 shared_informer.go:262] Caches are synced for resource quota
	I0730 00:06:50.430723       1 shared_informer.go:262] Caches are synced for disruption
	I0730 00:06:50.430737       1 disruption.go:371] Sending events to api server.
	I0730 00:06:50.439161       1 shared_informer.go:262] Caches are synced for resource quota
	I0730 00:06:50.494976       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0730 00:06:50.895706       1 shared_informer.go:262] Caches are synced for garbage collector
	I0730 00:06:50.937301       1 shared_informer.go:262] Caches are synced for garbage collector
	I0730 00:06:50.937389       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0730 00:06:51.093907       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q5zg2"
	I0730 00:06:51.190927       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0730 00:06:51.339403       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-8gbzq"
	I0730 00:06:51.343595       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-kc4f2"
	
	
	==> kube-proxy [b268d01093b4] <==
	I0730 00:06:51.545316       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0730 00:06:51.545342       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0730 00:06:51.545351       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0730 00:06:51.554449       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0730 00:06:51.554463       1 server_others.go:206] "Using iptables Proxier"
	I0730 00:06:51.554476       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0730 00:06:51.554596       1 server.go:661] "Version info" version="v1.24.1"
	I0730 00:06:51.554629       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:06:51.554900       1 config.go:317] "Starting service config controller"
	I0730 00:06:51.554913       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0730 00:06:51.554954       1 config.go:226] "Starting endpoint slice config controller"
	I0730 00:06:51.554963       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0730 00:06:51.555345       1 config.go:444] "Starting node config controller"
	I0730 00:06:51.555372       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0730 00:06:51.655881       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0730 00:06:51.655898       1 shared_informer.go:262] Caches are synced for service config
	I0730 00:06:51.655881       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [ab12392f1d97] <==
	W0730 00:06:34.830336       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 00:06:34.830339       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0730 00:06:34.830352       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0730 00:06:34.830355       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0730 00:06:34.830366       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 00:06:34.830370       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 00:06:34.830381       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0730 00:06:34.830384       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0730 00:06:34.830425       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0730 00:06:34.830433       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0730 00:06:34.831346       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0730 00:06:34.831369       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0730 00:06:34.831391       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0730 00:06:34.831396       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0730 00:06:35.660524       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0730 00:06:35.660631       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0730 00:06:35.702162       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0730 00:06:35.702240       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0730 00:06:35.812305       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0730 00:06:35.812385       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0730 00:06:35.839347       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 00:06:35.839364       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 00:06:35.865140       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0730 00:06:35.865227       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0730 00:06:36.227192       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-07-30 00:00:52 UTC, ends at Tue 2024-07-30 00:10:55 UTC. --
	Jul 30 00:06:50 running-upgrade-449000 kubelet[15251]: I0730 00:06:50.223094   15251 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 30 00:06:50 running-upgrade-449000 kubelet[15251]: I0730 00:06:50.223558   15251 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 30 00:06:50 running-upgrade-449000 kubelet[15251]: I0730 00:06:50.242353   15251 topology_manager.go:200] "Topology Admit Handler"
	Jul 30 00:06:50 running-upgrade-449000 kubelet[15251]: I0730 00:06:50.323445   15251 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/86a9e26b-0f9c-42d8-9b5d-a926efc5de27-tmp\") pod \"storage-provisioner\" (UID: \"86a9e26b-0f9c-42d8-9b5d-a926efc5de27\") " pod="kube-system/storage-provisioner"
	Jul 30 00:06:50 running-upgrade-449000 kubelet[15251]: I0730 00:06:50.323474   15251 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s7gk\" (UniqueName: \"kubernetes.io/projected/86a9e26b-0f9c-42d8-9b5d-a926efc5de27-kube-api-access-4s7gk\") pod \"storage-provisioner\" (UID: \"86a9e26b-0f9c-42d8-9b5d-a926efc5de27\") " pod="kube-system/storage-provisioner"
	Jul 30 00:06:50 running-upgrade-449000 kubelet[15251]: E0730 00:06:50.427498   15251 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 30 00:06:50 running-upgrade-449000 kubelet[15251]: E0730 00:06:50.427519   15251 projected.go:192] Error preparing data for projected volume kube-api-access-4s7gk for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 30 00:06:50 running-upgrade-449000 kubelet[15251]: E0730 00:06:50.427564   15251 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/86a9e26b-0f9c-42d8-9b5d-a926efc5de27-kube-api-access-4s7gk podName:86a9e26b-0f9c-42d8-9b5d-a926efc5de27 nodeName:}" failed. No retries permitted until 2024-07-30 00:06:50.92754251 +0000 UTC m=+13.484434390 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4s7gk" (UniqueName: "kubernetes.io/projected/86a9e26b-0f9c-42d8-9b5d-a926efc5de27-kube-api-access-4s7gk") pod "storage-provisioner" (UID: "86a9e26b-0f9c-42d8-9b5d-a926efc5de27") : configmap "kube-root-ca.crt" not found
	Jul 30 00:06:50 running-upgrade-449000 kubelet[15251]: E0730 00:06:50.930675   15251 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 30 00:06:50 running-upgrade-449000 kubelet[15251]: E0730 00:06:50.930704   15251 projected.go:192] Error preparing data for projected volume kube-api-access-4s7gk for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 30 00:06:50 running-upgrade-449000 kubelet[15251]: E0730 00:06:50.930736   15251 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/86a9e26b-0f9c-42d8-9b5d-a926efc5de27-kube-api-access-4s7gk podName:86a9e26b-0f9c-42d8-9b5d-a926efc5de27 nodeName:}" failed. No retries permitted until 2024-07-30 00:06:51.930725114 +0000 UTC m=+14.487616995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4s7gk" (UniqueName: "kubernetes.io/projected/86a9e26b-0f9c-42d8-9b5d-a926efc5de27-kube-api-access-4s7gk") pod "storage-provisioner" (UID: "86a9e26b-0f9c-42d8-9b5d-a926efc5de27") : configmap "kube-root-ca.crt" not found
	Jul 30 00:06:51 running-upgrade-449000 kubelet[15251]: I0730 00:06:51.096500   15251 topology_manager.go:200] "Topology Admit Handler"
	Jul 30 00:06:51 running-upgrade-449000 kubelet[15251]: I0730 00:06:51.132316   15251 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27376224-9698-429d-b81f-b76dbd64c964-xtables-lock\") pod \"kube-proxy-q5zg2\" (UID: \"27376224-9698-429d-b81f-b76dbd64c964\") " pod="kube-system/kube-proxy-q5zg2"
	Jul 30 00:06:51 running-upgrade-449000 kubelet[15251]: I0730 00:06:51.132390   15251 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2qlb\" (UniqueName: \"kubernetes.io/projected/27376224-9698-429d-b81f-b76dbd64c964-kube-api-access-g2qlb\") pod \"kube-proxy-q5zg2\" (UID: \"27376224-9698-429d-b81f-b76dbd64c964\") " pod="kube-system/kube-proxy-q5zg2"
	Jul 30 00:06:51 running-upgrade-449000 kubelet[15251]: I0730 00:06:51.132408   15251 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27376224-9698-429d-b81f-b76dbd64c964-kube-proxy\") pod \"kube-proxy-q5zg2\" (UID: \"27376224-9698-429d-b81f-b76dbd64c964\") " pod="kube-system/kube-proxy-q5zg2"
	Jul 30 00:06:51 running-upgrade-449000 kubelet[15251]: I0730 00:06:51.132419   15251 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27376224-9698-429d-b81f-b76dbd64c964-lib-modules\") pod \"kube-proxy-q5zg2\" (UID: \"27376224-9698-429d-b81f-b76dbd64c964\") " pod="kube-system/kube-proxy-q5zg2"
	Jul 30 00:06:51 running-upgrade-449000 kubelet[15251]: I0730 00:06:51.342173   15251 topology_manager.go:200] "Topology Admit Handler"
	Jul 30 00:06:51 running-upgrade-449000 kubelet[15251]: I0730 00:06:51.347569   15251 topology_manager.go:200] "Topology Admit Handler"
	Jul 30 00:06:51 running-upgrade-449000 kubelet[15251]: I0730 00:06:51.439786   15251 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a028a0bb-0ccd-440d-a905-e5350bcecfcd-config-volume\") pod \"coredns-6d4b75cb6d-8gbzq\" (UID: \"a028a0bb-0ccd-440d-a905-e5350bcecfcd\") " pod="kube-system/coredns-6d4b75cb6d-8gbzq"
	Jul 30 00:06:51 running-upgrade-449000 kubelet[15251]: I0730 00:06:51.439896   15251 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5r7k\" (UniqueName: \"kubernetes.io/projected/a028a0bb-0ccd-440d-a905-e5350bcecfcd-kube-api-access-h5r7k\") pod \"coredns-6d4b75cb6d-8gbzq\" (UID: \"a028a0bb-0ccd-440d-a905-e5350bcecfcd\") " pod="kube-system/coredns-6d4b75cb6d-8gbzq"
	Jul 30 00:06:51 running-upgrade-449000 kubelet[15251]: I0730 00:06:51.439910   15251 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c90128a-b29a-4bde-9204-0f7b442aad47-config-volume\") pod \"coredns-6d4b75cb6d-kc4f2\" (UID: \"3c90128a-b29a-4bde-9204-0f7b442aad47\") " pod="kube-system/coredns-6d4b75cb6d-kc4f2"
	Jul 30 00:06:51 running-upgrade-449000 kubelet[15251]: I0730 00:06:51.439921   15251 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnlzl\" (UniqueName: \"kubernetes.io/projected/3c90128a-b29a-4bde-9204-0f7b442aad47-kube-api-access-tnlzl\") pod \"coredns-6d4b75cb6d-kc4f2\" (UID: \"3c90128a-b29a-4bde-9204-0f7b442aad47\") " pod="kube-system/coredns-6d4b75cb6d-kc4f2"
	Jul 30 00:06:52 running-upgrade-449000 kubelet[15251]: I0730 00:06:52.698522   15251 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0181b520d18595138f1f949d5b328182b72258160544180761f4b3f8d5fa06fe"
	Jul 30 00:10:39 running-upgrade-449000 kubelet[15251]: I0730 00:10:39.910178   15251 scope.go:110] "RemoveContainer" containerID="4e7e6bd3ac0142c09131aa47492aae51b975647353f40ab2eb452f855ed72ff4"
	Jul 30 00:10:39 running-upgrade-449000 kubelet[15251]: I0730 00:10:39.937253   15251 scope.go:110] "RemoveContainer" containerID="f94cf076b90f52c40ed873c65dbfe34aa5f86433d31a111b91a1139a2702b2ab"
	
	
	==> storage-provisioner [34ac75df0db5] <==
	I0730 00:06:52.244764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0730 00:06:52.251059       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0730 00:06:52.251080       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0730 00:06:52.254898       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0730 00:06:52.255011       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a48826f-61cc-4a98-bb5f-63fa8de1fccc", APIVersion:"v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-449000_8fde6166-ca91-41f4-8efb-a959d773d4e4 became leader
	I0730 00:06:52.255083       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-449000_8fde6166-ca91-41f4-8efb-a959d773d4e4!
	I0730 00:06:52.355875       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-449000_8fde6166-ca91-41f4-8efb-a959d773d4e4!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-449000 -n running-upgrade-449000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-449000 -n running-upgrade-449000: exit status 2 (15.629681041s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-449000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-449000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-449000
--- FAIL: TestRunningBinaryUpgrade (653.98s)

TestKubernetesUpgrade (17.44s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-457000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-457000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.990543083s)

-- stdout --
	* [kubernetes-upgrade-457000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-457000" primary control-plane node in "kubernetes-upgrade-457000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-457000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:59:59.509813    9163 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:59:59.509952    9163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:59:59.509956    9163 out.go:304] Setting ErrFile to fd 2...
	I0729 16:59:59.509958    9163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:59:59.510076    9163 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:59:59.511166    9163 out.go:298] Setting JSON to false
	I0729 16:59:59.527135    9163 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5366,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:59:59.527222    9163 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:59:59.532391    9163 out.go:177] * [kubernetes-upgrade-457000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:59:59.540312    9163 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:59:59.540363    9163 notify.go:220] Checking for updates...
	I0729 16:59:59.545694    9163 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:59:59.549288    9163 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:59:59.552325    9163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:59:59.555330    9163 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:59:59.558344    9163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:59:59.561612    9163 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:59:59.561677    9163 config.go:182] Loaded profile config "offline-docker-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:59:59.561727    9163 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:59:59.566324    9163 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:59:59.573267    9163 start.go:297] selected driver: qemu2
	I0729 16:59:59.573274    9163 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:59:59.573281    9163 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:59:59.575805    9163 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:59:59.580266    9163 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:59:59.583351    9163 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:59:59.583385    9163 cni.go:84] Creating CNI manager for ""
	I0729 16:59:59.583392    9163 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 16:59:59.583419    9163 start.go:340] cluster config:
	{Name:kubernetes-upgrade-457000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-457000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:59:59.587346    9163 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:59:59.596199    9163 out.go:177] * Starting "kubernetes-upgrade-457000" primary control-plane node in "kubernetes-upgrade-457000" cluster
	I0729 16:59:59.600319    9163 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:59:59.600335    9163 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:59:59.600349    9163 cache.go:56] Caching tarball of preloaded images
	I0729 16:59:59.600424    9163 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:59:59.600430    9163 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 16:59:59.600492    9163 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/kubernetes-upgrade-457000/config.json ...
	I0729 16:59:59.600507    9163 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/kubernetes-upgrade-457000/config.json: {Name:mkcd5d70c8042f8382b711c76851cdb3076d3250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:59:59.600873    9163 start.go:360] acquireMachinesLock for kubernetes-upgrade-457000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:59:59.635127    9163 start.go:364] duration metric: took 34.241333ms to acquireMachinesLock for "kubernetes-upgrade-457000"
	I0729 16:59:59.635143    9163 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-457000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:59:59.635186    9163 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:59:59.641347    9163 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:59:59.662973    9163 start.go:159] libmachine.API.Create for "kubernetes-upgrade-457000" (driver="qemu2")
	I0729 16:59:59.663008    9163 client.go:168] LocalClient.Create starting
	I0729 16:59:59.663071    9163 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 16:59:59.663105    9163 main.go:141] libmachine: Decoding PEM data...
	I0729 16:59:59.663116    9163 main.go:141] libmachine: Parsing certificate...
	I0729 16:59:59.663159    9163 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 16:59:59.663185    9163 main.go:141] libmachine: Decoding PEM data...
	I0729 16:59:59.663198    9163 main.go:141] libmachine: Parsing certificate...
	I0729 16:59:59.667672    9163 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:59:59.840327    9163 main.go:141] libmachine: Creating SSH key...
	I0729 17:00:00.055921    9163 main.go:141] libmachine: Creating Disk image...
	I0729 17:00:00.055935    9163 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:00:00.056200    9163 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2
	I0729 17:00:00.070630    9163 main.go:141] libmachine: STDOUT: 
	I0729 17:00:00.070647    9163 main.go:141] libmachine: STDERR: 
	I0729 17:00:00.070702    9163 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2 +20000M
	I0729 17:00:00.078691    9163 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:00:00.078708    9163 main.go:141] libmachine: STDERR: 
	I0729 17:00:00.078720    9163 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2
	I0729 17:00:00.078725    9163 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:00:00.078737    9163 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:00:00.078768    9163 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:be:74:ac:fe:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2
	I0729 17:00:00.080486    9163 main.go:141] libmachine: STDOUT: 
	I0729 17:00:00.080500    9163 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:00:00.080518    9163 client.go:171] duration metric: took 417.504167ms to LocalClient.Create
	I0729 17:00:02.082690    9163 start.go:128] duration metric: took 2.447478583s to createHost
	I0729 17:00:02.082751    9163 start.go:83] releasing machines lock for "kubernetes-upgrade-457000", held for 2.447615416s
	W0729 17:00:02.082807    9163 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:00:02.097213    9163 out.go:177] * Deleting "kubernetes-upgrade-457000" in qemu2 ...
	W0729 17:00:02.121341    9163 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:00:02.121380    9163 start.go:729] Will try again in 5 seconds ...
	I0729 17:00:07.125415    9163 start.go:360] acquireMachinesLock for kubernetes-upgrade-457000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:00:07.125795    9163 start.go:364] duration metric: took 226.583µs to acquireMachinesLock for "kubernetes-upgrade-457000"
	I0729 17:00:07.125891    9163 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-457000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:00:07.126115    9163 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:00:07.130101    9163 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:00:07.170772    9163 start.go:159] libmachine.API.Create for "kubernetes-upgrade-457000" (driver="qemu2")
	I0729 17:00:07.170819    9163 client.go:168] LocalClient.Create starting
	I0729 17:00:07.170925    9163 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:00:07.170970    9163 main.go:141] libmachine: Decoding PEM data...
	I0729 17:00:07.170984    9163 main.go:141] libmachine: Parsing certificate...
	I0729 17:00:07.171031    9163 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:00:07.171057    9163 main.go:141] libmachine: Decoding PEM data...
	I0729 17:00:07.171070    9163 main.go:141] libmachine: Parsing certificate...
	I0729 17:00:07.172304    9163 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:00:07.361826    9163 main.go:141] libmachine: Creating SSH key...
	I0729 17:00:07.410413    9163 main.go:141] libmachine: Creating Disk image...
	I0729 17:00:07.410422    9163 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:00:07.410585    9163 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2
	I0729 17:00:07.420002    9163 main.go:141] libmachine: STDOUT: 
	I0729 17:00:07.420018    9163 main.go:141] libmachine: STDERR: 
	I0729 17:00:07.420067    9163 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2 +20000M
	I0729 17:00:07.427866    9163 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:00:07.427893    9163 main.go:141] libmachine: STDERR: 
	I0729 17:00:07.427908    9163 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2
	I0729 17:00:07.427913    9163 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:00:07.427922    9163 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:00:07.427950    9163 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:19:3f:5e:88:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2
	I0729 17:00:07.429597    9163 main.go:141] libmachine: STDOUT: 
	I0729 17:00:07.429611    9163 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:00:07.429624    9163 client.go:171] duration metric: took 258.800291ms to LocalClient.Create
	I0729 17:00:09.431764    9163 start.go:128] duration metric: took 2.305629334s to createHost
	I0729 17:00:09.431816    9163 start.go:83] releasing machines lock for "kubernetes-upgrade-457000", held for 2.306003291s
	W0729 17:00:09.432101    9163 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-457000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-457000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:00:09.444492    9163 out.go:177] 
	W0729 17:00:09.449533    9163 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:00:09.449556    9163 out.go:239] * 
	* 
	W0729 17:00:09.452271    9163 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:00:09.463462    9163 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-457000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-457000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-457000: (2.000521959s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-457000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-457000 status --format={{.Host}}: exit status 7 (65.584459ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-457000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-457000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.208507917s)

-- stdout --
	* [kubernetes-upgrade-457000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-457000" primary control-plane node in "kubernetes-upgrade-457000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-457000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-457000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:00:11.574047    9462 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:00:11.574176    9462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:00:11.574178    9462 out.go:304] Setting ErrFile to fd 2...
	I0729 17:00:11.574181    9462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:00:11.574323    9462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:00:11.575386    9462 out.go:298] Setting JSON to false
	I0729 17:00:11.591622    9462 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5378,"bootTime":1722292233,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:00:11.591701    9462 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:00:11.597570    9462 out.go:177] * [kubernetes-upgrade-457000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:00:11.604550    9462 notify.go:220] Checking for updates...
	I0729 17:00:11.608560    9462 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:00:11.614516    9462 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:00:11.620486    9462 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:00:11.628548    9462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:00:11.636520    9462 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:00:11.644505    9462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:00:11.647802    9462 config.go:182] Loaded profile config "kubernetes-upgrade-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 17:00:11.648065    9462 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:00:11.651546    9462 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 17:00:11.659511    9462 start.go:297] selected driver: qemu2
	I0729 17:00:11.659516    9462 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-457000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:00:11.659559    9462 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:00:11.661852    9462 cni.go:84] Creating CNI manager for ""
	I0729 17:00:11.661869    9462 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:00:11.661897    9462 start.go:340] cluster config:
	{Name:kubernetes-upgrade-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-457000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:00:11.665261    9462 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:00:11.672525    9462 out.go:177] * Starting "kubernetes-upgrade-457000" primary control-plane node in "kubernetes-upgrade-457000" cluster
	I0729 17:00:11.675565    9462 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 17:00:11.675579    9462 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 17:00:11.675589    9462 cache.go:56] Caching tarball of preloaded images
	I0729 17:00:11.675644    9462 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:00:11.675650    9462 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 17:00:11.675703    9462 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/kubernetes-upgrade-457000/config.json ...
	I0729 17:00:11.675972    9462 start.go:360] acquireMachinesLock for kubernetes-upgrade-457000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:00:11.676005    9462 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "kubernetes-upgrade-457000"
	I0729 17:00:11.676014    9462 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:00:11.676021    9462 fix.go:54] fixHost starting: 
	I0729 17:00:11.676139    9462 fix.go:112] recreateIfNeeded on kubernetes-upgrade-457000: state=Stopped err=<nil>
	W0729 17:00:11.676147    9462 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:00:11.679497    9462 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-457000" ...
	I0729 17:00:11.687529    9462 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:00:11.687560    9462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:19:3f:5e:88:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2
	I0729 17:00:11.689382    9462 main.go:141] libmachine: STDOUT: 
	I0729 17:00:11.689400    9462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:00:11.689426    9462 fix.go:56] duration metric: took 13.405834ms for fixHost
	I0729 17:00:11.689431    9462 start.go:83] releasing machines lock for "kubernetes-upgrade-457000", held for 13.421541ms
	W0729 17:00:11.689437    9462 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:00:11.689471    9462 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:00:11.689475    9462 start.go:729] Will try again in 5 seconds ...
	I0729 17:00:16.689875    9462 start.go:360] acquireMachinesLock for kubernetes-upgrade-457000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:00:16.690589    9462 start.go:364] duration metric: took 552.042µs to acquireMachinesLock for "kubernetes-upgrade-457000"
	I0729 17:00:16.690747    9462 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:00:16.690769    9462 fix.go:54] fixHost starting: 
	I0729 17:00:16.691496    9462 fix.go:112] recreateIfNeeded on kubernetes-upgrade-457000: state=Stopped err=<nil>
	W0729 17:00:16.691525    9462 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:00:16.695753    9462 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-457000" ...
	I0729 17:00:16.705618    9462 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:00:16.706010    9462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:19:3f:5e:88:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubernetes-upgrade-457000/disk.qcow2
	I0729 17:00:16.715425    9462 main.go:141] libmachine: STDOUT: 
	I0729 17:00:16.715501    9462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:00:16.715596    9462 fix.go:56] duration metric: took 24.8275ms for fixHost
	I0729 17:00:16.715623    9462 start.go:83] releasing machines lock for "kubernetes-upgrade-457000", held for 25.008209ms
	W0729 17:00:16.715830    9462 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-457000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-457000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:00:16.724625    9462 out.go:177] 
	W0729 17:00:16.728886    9462 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:00:16.728935    9462 out.go:239] * 
	* 
	W0729 17:00:16.731341    9462 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:00:16.740660    9462 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-457000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-457000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-457000 version --output=json: exit status 1 (61.827959ms)

** stderr ** 
	error: context "kubernetes-upgrade-457000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-29 17:00:16.815423 -0700 PDT m=+774.121768210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-457000 -n kubernetes-upgrade-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-457000 -n kubernetes-upgrade-457000: exit status 7 (32.63775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-457000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-457000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-457000
--- FAIL: TestKubernetesUpgrade (17.44s)

TestStoppedBinaryUpgrade/Upgrade (582.72s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.756487293 start -p stopped-upgrade-208000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.756487293 start -p stopped-upgrade-208000 --memory=2200 --vm-driver=qemu2 : (49.849452583s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.756487293 -p stopped-upgrade-208000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.756487293 -p stopped-upgrade-208000 stop: (12.103127458s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-208000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-208000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.674161917s)

-- stdout --
	* [stopped-upgrade-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-208000" primary control-plane node in "stopped-upgrade-208000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-208000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 17:01:10.753025    9508 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:01:10.753193    9508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:01:10.753197    9508 out.go:304] Setting ErrFile to fd 2...
	I0729 17:01:10.753200    9508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:01:10.753359    9508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:01:10.754568    9508 out.go:298] Setting JSON to false
	I0729 17:01:10.774187    9508 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5437,"bootTime":1722292233,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:01:10.774248    9508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:01:10.778647    9508 out.go:177] * [stopped-upgrade-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:01:10.786513    9508 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:01:10.786534    9508 notify.go:220] Checking for updates...
	I0729 17:01:10.793393    9508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:01:10.796496    9508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:01:10.799534    9508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:01:10.802489    9508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:01:10.805667    9508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:01:10.808807    9508 config.go:182] Loaded profile config "stopped-upgrade-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 17:01:10.810385    9508 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 17:01:10.813447    9508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:01:10.817523    9508 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 17:01:10.822460    9508 start.go:297] selected driver: qemu2
	I0729 17:01:10.822468    9508 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51259 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 17:01:10.822534    9508 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:01:10.825262    9508 cni.go:84] Creating CNI manager for ""
	I0729 17:01:10.825282    9508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:01:10.825308    9508 start.go:340] cluster config:
	{Name:stopped-upgrade-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51259 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 17:01:10.825363    9508 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:01:10.833495    9508 out.go:177] * Starting "stopped-upgrade-208000" primary control-plane node in "stopped-upgrade-208000" cluster
	I0729 17:01:10.837516    9508 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 17:01:10.837537    9508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 17:01:10.837548    9508 cache.go:56] Caching tarball of preloaded images
	I0729 17:01:10.837616    9508 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:01:10.837622    9508 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 17:01:10.837687    9508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/config.json ...
	I0729 17:01:10.838163    9508 start.go:360] acquireMachinesLock for stopped-upgrade-208000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:01:10.838198    9508 start.go:364] duration metric: took 28.916µs to acquireMachinesLock for "stopped-upgrade-208000"
	I0729 17:01:10.838207    9508 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:01:10.838212    9508 fix.go:54] fixHost starting: 
	I0729 17:01:10.838326    9508 fix.go:112] recreateIfNeeded on stopped-upgrade-208000: state=Stopped err=<nil>
	W0729 17:01:10.838335    9508 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:01:10.842404    9508 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-208000" ...
	I0729 17:01:10.850549    9508 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:01:10.850661    9508 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/stopped-upgrade-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/stopped-upgrade-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/stopped-upgrade-208000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51224-:22,hostfwd=tcp::51225-:2376,hostname=stopped-upgrade-208000 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/stopped-upgrade-208000/disk.qcow2
	I0729 17:01:10.899196    9508 main.go:141] libmachine: STDOUT: 
	I0729 17:01:10.899223    9508 main.go:141] libmachine: STDERR: 
	I0729 17:01:10.899229    9508 main.go:141] libmachine: Waiting for VM to start (ssh -p 51224 docker@127.0.0.1)...
	I0729 17:01:31.293078    9508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/config.json ...
	I0729 17:01:31.293651    9508 machine.go:94] provisionDockerMachine start ...
	I0729 17:01:31.293754    9508 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:31.294090    9508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bbaa10] 0x102bbd270 <nil>  [] 0s} localhost 51224 <nil> <nil>}
	I0729 17:01:31.294102    9508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 17:01:31.370214    9508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 17:01:31.370236    9508 buildroot.go:166] provisioning hostname "stopped-upgrade-208000"
	I0729 17:01:31.370307    9508 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:31.370457    9508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bbaa10] 0x102bbd270 <nil>  [] 0s} localhost 51224 <nil> <nil>}
	I0729 17:01:31.370463    9508 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-208000 && echo "stopped-upgrade-208000" | sudo tee /etc/hostname
	I0729 17:01:31.439650    9508 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-208000
	
	I0729 17:01:31.439702    9508 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:31.439824    9508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bbaa10] 0x102bbd270 <nil>  [] 0s} localhost 51224 <nil> <nil>}
	I0729 17:01:31.439831    9508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-208000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-208000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-208000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:01:31.501886    9508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:01:31.501908    9508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19346-7076/.minikube CaCertPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19346-7076/.minikube}
	I0729 17:01:31.501918    9508 buildroot.go:174] setting up certificates
	I0729 17:01:31.501922    9508 provision.go:84] configureAuth start
	I0729 17:01:31.501930    9508 provision.go:143] copyHostCerts
	I0729 17:01:31.502034    9508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.pem, removing ...
	I0729 17:01:31.502042    9508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.pem
	I0729 17:01:31.502163    9508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.pem (1082 bytes)
	I0729 17:01:31.502332    9508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19346-7076/.minikube/cert.pem, removing ...
	I0729 17:01:31.502338    9508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19346-7076/.minikube/cert.pem
	I0729 17:01:31.502397    9508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19346-7076/.minikube/cert.pem (1123 bytes)
	I0729 17:01:31.502509    9508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19346-7076/.minikube/key.pem, removing ...
	I0729 17:01:31.502512    9508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19346-7076/.minikube/key.pem
	I0729 17:01:31.502559    9508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19346-7076/.minikube/key.pem (1679 bytes)
	I0729 17:01:31.502645    9508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-208000 san=[127.0.0.1 localhost minikube stopped-upgrade-208000]
	I0729 17:01:31.609152    9508 provision.go:177] copyRemoteCerts
	I0729 17:01:31.609203    9508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:01:31.609212    9508 sshutil.go:53] new ssh client: &{IP:localhost Port:51224 SSHKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/stopped-upgrade-208000/id_rsa Username:docker}
	I0729 17:01:31.643015    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 17:01:31.650204    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 17:01:31.657004    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 17:01:31.663121    9508 provision.go:87] duration metric: took 161.1945ms to configureAuth
	I0729 17:01:31.663132    9508 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:01:31.663245    9508 config.go:182] Loaded profile config "stopped-upgrade-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 17:01:31.663278    9508 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:31.663407    9508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bbaa10] 0x102bbd270 <nil>  [] 0s} localhost 51224 <nil> <nil>}
	I0729 17:01:31.663412    9508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 17:01:31.723464    9508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 17:01:31.723473    9508 buildroot.go:70] root file system type: tmpfs
	I0729 17:01:31.723527    9508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 17:01:31.723603    9508 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:31.723726    9508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bbaa10] 0x102bbd270 <nil>  [] 0s} localhost 51224 <nil> <nil>}
	I0729 17:01:31.723758    9508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 17:01:31.786179    9508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 17:01:31.786233    9508 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:31.786356    9508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bbaa10] 0x102bbd270 <nil>  [] 0s} localhost 51224 <nil> <nil>}
	I0729 17:01:31.786364    9508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 17:01:32.148571    9508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 17:01:32.148583    9508 machine.go:97] duration metric: took 854.924917ms to provisionDockerMachine
	I0729 17:01:32.148589    9508 start.go:293] postStartSetup for "stopped-upgrade-208000" (driver="qemu2")
	I0729 17:01:32.148594    9508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:01:32.148667    9508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:01:32.148678    9508 sshutil.go:53] new ssh client: &{IP:localhost Port:51224 SSHKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/stopped-upgrade-208000/id_rsa Username:docker}
	I0729 17:01:32.184399    9508 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:01:32.185545    9508 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 17:01:32.185553    9508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19346-7076/.minikube/addons for local assets ...
	I0729 17:01:32.185640    9508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19346-7076/.minikube/files for local assets ...
	I0729 17:01:32.185771    9508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19346-7076/.minikube/files/etc/ssl/certs/75652.pem -> 75652.pem in /etc/ssl/certs
	I0729 17:01:32.185906    9508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:01:32.188478    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/files/etc/ssl/certs/75652.pem --> /etc/ssl/certs/75652.pem (1708 bytes)
	I0729 17:01:32.195155    9508 start.go:296] duration metric: took 46.559792ms for postStartSetup
	I0729 17:01:32.195168    9508 fix.go:56] duration metric: took 21.356970792s for fixHost
	I0729 17:01:32.195200    9508 main.go:141] libmachine: Using SSH client type: native
	I0729 17:01:32.195312    9508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bbaa10] 0x102bbd270 <nil>  [] 0s} localhost 51224 <nil> <nil>}
	I0729 17:01:32.195316    9508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 17:01:32.256345    9508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722297692.577306546
	
	I0729 17:01:32.256357    9508 fix.go:216] guest clock: 1722297692.577306546
	I0729 17:01:32.256361    9508 fix.go:229] Guest: 2024-07-29 17:01:32.577306546 -0700 PDT Remote: 2024-07-29 17:01:32.19517 -0700 PDT m=+21.475146292 (delta=382.136546ms)
	I0729 17:01:32.256379    9508 fix.go:200] guest clock delta is within tolerance: 382.136546ms
	I0729 17:01:32.256381    9508 start.go:83] releasing machines lock for "stopped-upgrade-208000", held for 21.41819175s
	I0729 17:01:32.256463    9508 ssh_runner.go:195] Run: cat /version.json
	I0729 17:01:32.256474    9508 sshutil.go:53] new ssh client: &{IP:localhost Port:51224 SSHKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/stopped-upgrade-208000/id_rsa Username:docker}
	I0729 17:01:32.256481    9508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:01:32.256499    9508 sshutil.go:53] new ssh client: &{IP:localhost Port:51224 SSHKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/stopped-upgrade-208000/id_rsa Username:docker}
	W0729 17:01:32.257285    9508 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51224: connect: connection refused
	I0729 17:01:32.257318    9508 retry.go:31] will retry after 171.761958ms: dial tcp [::1]:51224: connect: connection refused
	W0729 17:01:32.462730    9508 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 17:01:32.462806    9508 ssh_runner.go:195] Run: systemctl --version
	I0729 17:01:32.465199    9508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:01:32.467589    9508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:01:32.467645    9508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 17:01:32.471305    9508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 17:01:32.476867    9508 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:01:32.476880    9508 start.go:495] detecting cgroup driver to use...
	I0729 17:01:32.477003    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:01:32.484749    9508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 17:01:32.488687    9508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 17:01:32.492774    9508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 17:01:32.492828    9508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 17:01:32.496716    9508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 17:01:32.500291    9508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 17:01:32.503389    9508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 17:01:32.506319    9508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:01:32.509409    9508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 17:01:32.512345    9508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 17:01:32.515648    9508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 17:01:32.518645    9508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:01:32.521842    9508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:01:32.525920    9508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:01:32.588626    9508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 17:01:32.595721    9508 start.go:495] detecting cgroup driver to use...
	I0729 17:01:32.595805    9508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 17:01:32.602041    9508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:01:32.607486    9508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:01:32.613915    9508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:01:32.618844    9508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 17:01:32.623446    9508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 17:01:32.646103    9508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 17:01:32.651383    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:01:32.656595    9508 ssh_runner.go:195] Run: which cri-dockerd
	I0729 17:01:32.657847    9508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 17:01:32.660853    9508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 17:01:32.666034    9508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 17:01:32.730700    9508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 17:01:32.795944    9508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 17:01:32.796000    9508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 17:01:32.801445    9508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:01:32.869320    9508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 17:01:33.980461    9508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.111125625s)
	I0729 17:01:33.980524    9508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 17:01:33.985196    9508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 17:01:33.989669    9508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 17:01:34.045609    9508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 17:01:34.114503    9508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:01:34.177869    9508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 17:01:34.184337    9508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 17:01:34.188456    9508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:01:34.254770    9508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 17:01:34.292240    9508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 17:01:34.292322    9508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 17:01:34.294275    9508 start.go:563] Will wait 60s for crictl version
	I0729 17:01:34.294326    9508 ssh_runner.go:195] Run: which crictl
	I0729 17:01:34.295745    9508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:01:34.309497    9508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 17:01:34.309575    9508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 17:01:34.325727    9508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 17:01:34.346676    9508 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 17:01:34.346796    9508 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 17:01:34.348148    9508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:01:34.351718    9508 kubeadm.go:883] updating cluster {Name:stopped-upgrade-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51259 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 17:01:34.351763    9508 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 17:01:34.351801    9508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 17:01:34.362060    9508 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 17:01:34.362069    9508 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 17:01:34.362123    9508 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 17:01:34.365597    9508 ssh_runner.go:195] Run: which lz4
	I0729 17:01:34.366810    9508 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 17:01:34.368130    9508 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 17:01:34.368141    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 17:01:35.317448    9508 docker.go:649] duration metric: took 950.66625ms to copy over tarball
	I0729 17:01:35.317507    9508 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 17:01:36.470318    9508 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.152794458s)
	I0729 17:01:36.470332    9508 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 17:01:36.486751    9508 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 17:01:36.490133    9508 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 17:01:36.495641    9508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:01:36.558430    9508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 17:01:37.733543    9508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.175095792s)
	I0729 17:01:37.733636    9508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 17:01:37.747226    9508 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 17:01:37.747235    9508 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 17:01:37.747240    9508 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 17:01:37.751023    9508 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:01:37.752710    9508 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 17:01:37.754771    9508 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 17:01:37.754880    9508 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:01:37.756820    9508 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 17:01:37.756918    9508 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 17:01:37.759365    9508 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 17:01:37.759379    9508 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 17:01:37.760803    9508 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 17:01:37.760934    9508 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 17:01:37.762756    9508 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 17:01:37.762990    9508 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 17:01:37.764559    9508 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 17:01:37.764654    9508 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 17:01:37.765300    9508 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 17:01:37.766275    9508 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 17:01:38.177976    9508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 17:01:38.188568    9508 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 17:01:38.188607    9508 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 17:01:38.188667    9508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 17:01:38.199108    9508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 17:01:38.201889    9508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 17:01:38.213165    9508 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 17:01:38.213187    9508 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 17:01:38.213240    9508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 17:01:38.222610    9508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 17:01:38.223343    9508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 17:01:38.223439    9508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 17:01:38.233020    9508 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0729 17:01:38.233051    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0729 17:01:38.233081    9508 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 17:01:38.233097    9508 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 17:01:38.233141    9508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 17:01:38.233431    9508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	W0729 17:01:38.238035    9508 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 17:01:38.238176    9508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 17:01:38.238184    9508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 17:01:38.249567    9508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 17:01:38.274817    9508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 17:01:38.274844    9508 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 17:01:38.274862    9508 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 17:01:38.274910    9508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 17:01:38.289348    9508 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 17:01:38.289369    9508 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 17:01:38.289372    9508 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 17:01:38.289383    9508 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 17:01:38.289426    9508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 17:01:38.289426    9508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 17:01:38.298993    9508 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 17:01:38.299014    9508 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 17:01:38.299072    9508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 17:01:38.328576    9508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 17:01:38.333650    9508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 17:01:38.333780    9508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 17:01:38.338232    9508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 17:01:38.338348    9508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 17:01:38.345508    9508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 17:01:38.353195    9508 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 17:01:38.353220    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 17:01:38.354910    9508 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 17:01:38.354926    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 17:01:38.390158    9508 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 17:01:38.390179    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0729 17:01:38.406259    9508 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 17:01:38.406371    9508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:01:38.468118    9508 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 17:01:38.468162    9508 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 17:01:38.468183    9508 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:01:38.468237    9508 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:01:38.475092    9508 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 17:01:38.475107    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 17:01:38.509980    9508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 17:01:38.510121    9508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 17:01:38.601052    9508 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 17:01:38.601072    9508 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 17:01:38.601078    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0729 17:01:38.601120    9508 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 17:01:38.601146    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 17:01:38.774017    9508 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 17:01:38.774041    9508 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 17:01:38.774053    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 17:01:39.009721    9508 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 17:01:39.009762    9508 cache_images.go:92] duration metric: took 1.262516834s to LoadCachedImages
	W0729 17:01:39.009814    9508 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
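[Editor's note] The image-cache phase logged above follows a fixed per-image pattern: a `stat` existence check on the remote path, an `scp` of the cached tarball when the check fails, and a pipe into `docker load`. A minimal dry-run sketch of that pattern (paths and the `load_cached_image` helper are illustrative, not minikube code) is:

```shell
# Dry-run sketch of minikube's cached-image load sequence seen in the log.
# It only prints the commands that would be run over SSH on the guest.
load_cached_image() {
  local cache="$1" remote="$2"
  # 1. Existence check: stat exits 1 if the image tarball is missing.
  echo "stat -c '%s %y' $remote"
  # 2. Transfer the tarball from the host cache when the check fails.
  echo "scp $cache --> $remote"
  # 3. Stream the tarball into the Docker daemon on the guest.
  echo "/bin/bash -c \"sudo cat $remote | docker load\""
}

load_cached_image \
  ~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 \
  /var/lib/minikube/images/pause_3.7
```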
	I0729 17:01:39.009820    9508 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 17:01:39.009886    9508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-208000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:01:39.009947    9508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 17:01:39.023266    9508 cni.go:84] Creating CNI manager for ""
	I0729 17:01:39.023278    9508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:01:39.023283    9508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 17:01:39.023291    9508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-208000 NodeName:stopped-upgrade-208000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 17:01:39.023354    9508 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-208000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 17:01:39.023411    9508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 17:01:39.026628    9508 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 17:01:39.026660    9508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 17:01:39.029702    9508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 17:01:39.034859    9508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:01:39.039879    9508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 17:01:39.044988    9508 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 17:01:39.046244    9508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:01:39.050055    9508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:01:39.106368    9508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:01:39.116569    9508 certs.go:68] Setting up /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000 for IP: 10.0.2.15
	I0729 17:01:39.116579    9508 certs.go:194] generating shared ca certs ...
	I0729 17:01:39.116587    9508 certs.go:226] acquiring lock for ca certs: {Name:mk1e3a56a4c4fc5577b9072afde2d071febb00e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:01:39.116853    9508 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.key
	I0729 17:01:39.116904    9508 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/proxy-client-ca.key
	I0729 17:01:39.116909    9508 certs.go:256] generating profile certs ...
	I0729 17:01:39.116984    9508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/client.key
	I0729 17:01:39.116997    9508 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/apiserver.key.26965a75
	I0729 17:01:39.117009    9508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/apiserver.crt.26965a75 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 17:01:39.343097    9508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/apiserver.crt.26965a75 ...
	I0729 17:01:39.343111    9508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/apiserver.crt.26965a75: {Name:mke92e3602f55831186d0e9e0fdf2d24c0fd923d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:01:39.343419    9508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/apiserver.key.26965a75 ...
	I0729 17:01:39.343425    9508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/apiserver.key.26965a75: {Name:mkbd091e0973374dfd76d7b9b629cbc70cf160ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:01:39.343572    9508 certs.go:381] copying /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/apiserver.crt.26965a75 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/apiserver.crt
	I0729 17:01:39.343721    9508 certs.go:385] copying /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/apiserver.key.26965a75 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/apiserver.key
	I0729 17:01:39.343879    9508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/proxy-client.key
	I0729 17:01:39.344019    9508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/7565.pem (1338 bytes)
	W0729 17:01:39.344047    9508 certs.go:480] ignoring /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/7565_empty.pem, impossibly tiny 0 bytes
	I0729 17:01:39.344053    9508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 17:01:39.344073    9508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem (1082 bytes)
	I0729 17:01:39.344094    9508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:01:39.344116    9508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/key.pem (1679 bytes)
	I0729 17:01:39.344153    9508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19346-7076/.minikube/files/etc/ssl/certs/75652.pem (1708 bytes)
	I0729 17:01:39.344528    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:01:39.351690    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 17:01:39.358481    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:01:39.365733    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 17:01:39.373554    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 17:01:39.381006    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 17:01:39.387746    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:01:39.394282    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:01:39.401719    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:01:39.409186    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/7565.pem --> /usr/share/ca-certificates/7565.pem (1338 bytes)
	I0729 17:01:39.416076    9508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19346-7076/.minikube/files/etc/ssl/certs/75652.pem --> /usr/share/ca-certificates/75652.pem (1708 bytes)
	I0729 17:01:39.422709    9508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 17:01:39.428092    9508 ssh_runner.go:195] Run: openssl version
	I0729 17:01:39.430113    9508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75652.pem && ln -fs /usr/share/ca-certificates/75652.pem /etc/ssl/certs/75652.pem"
	I0729 17:01:39.433648    9508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75652.pem
	I0729 17:01:39.435057    9508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 23:48 /usr/share/ca-certificates/75652.pem
	I0729 17:01:39.435079    9508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75652.pem
	I0729 17:01:39.436784    9508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75652.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:01:39.439827    9508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:01:39.442780    9508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:01:39.444313    9508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:00 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:01:39.444336    9508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:01:39.446014    9508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:01:39.449293    9508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7565.pem && ln -fs /usr/share/ca-certificates/7565.pem /etc/ssl/certs/7565.pem"
	I0729 17:01:39.452355    9508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7565.pem
	I0729 17:01:39.453680    9508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 23:48 /usr/share/ca-certificates/7565.pem
	I0729 17:01:39.453701    9508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7565.pem
	I0729 17:01:39.455498    9508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7565.pem /etc/ssl/certs/51391683.0"
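[Editor's note] The `openssl x509 -hash` / `ln -fs` pairs above install each CA PEM into the OpenSSL trust store: the certificate's subject hash names a `<hash>.0` symlink under /etc/ssl/certs. A small sketch of that convention (the `install_ca` helper and dry-run echo are illustrative):

```shell
# Sketch of the CA trust-store linking seen in the log: OpenSSL locates a
# CA by subject hash, so each PEM gets a /etc/ssl/certs/<hash>.0 symlink.
install_ca() {
  local pem="$1" hash
  # Subject hash is an 8-hex-digit value derived from the cert's subject.
  hash=$(openssl x509 -hash -noout -in "$pem")
  # Dry run: print the symlink command instead of touching /etc/ssl/certs.
  echo "ln -fs $pem /etc/ssl/certs/$hash.0"
}
```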
	I0729 17:01:39.458486    9508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:01:39.460145    9508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 17:01:39.462296    9508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 17:01:39.464262    9508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 17:01:39.466096    9508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 17:01:39.468012    9508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 17:01:39.469744    9508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 17:01:39.471595    9508 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51259 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 17:01:39.471659    9508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 17:01:39.481598    9508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 17:01:39.484539    9508 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 17:01:39.484545    9508 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 17:01:39.484568    9508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 17:01:39.487266    9508 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:01:39.487303    9508 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-208000" does not appear in /Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:01:39.487317    9508 kubeconfig.go:62] /Users/jenkins/minikube-integration/19346-7076/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-208000" cluster setting kubeconfig missing "stopped-upgrade-208000" context setting]
	I0729 17:01:39.487486    9508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/kubeconfig: {Name:mk580a93ad62a9c0663fd1e6ef1bfe6feb6bde87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:01:39.488141    9508 kapi.go:59] client config for stopped-upgrade-208000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/client.key", CAFile:"/Users/jenkins/minikube-integration/19346-7076/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103f501b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 17:01:39.489006    9508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 17:01:39.491617    9508 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-208000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0729 17:01:39.491623    9508 kubeadm.go:1160] stopping kube-system containers ...
	I0729 17:01:39.491663    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 17:01:39.502208    9508 docker.go:483] Stopping containers: [a3e5c6623186 34c6ea0e3a5b 86c7754ed8ba 1f5e563256bb b4051e54596c 237518adabe4 7f0a01357038 c8b37ecb9739]
	I0729 17:01:39.502270    9508 ssh_runner.go:195] Run: docker stop a3e5c6623186 34c6ea0e3a5b 86c7754ed8ba 1f5e563256bb b4051e54596c 237518adabe4 7f0a01357038 c8b37ecb9739
	I0729 17:01:39.513173    9508 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 17:01:39.518451    9508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 17:01:39.521664    9508 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 17:01:39.521671    9508 kubeadm.go:157] found existing configuration files:
	
	I0729 17:01:39.521698    9508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/admin.conf
	I0729 17:01:39.524488    9508 kubeadm.go:163] "https://control-plane.minikube.internal:51259" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 17:01:39.524510    9508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 17:01:39.527031    9508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/kubelet.conf
	I0729 17:01:39.530005    9508 kubeadm.go:163] "https://control-plane.minikube.internal:51259" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 17:01:39.530029    9508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 17:01:39.533047    9508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/controller-manager.conf
	I0729 17:01:39.535641    9508 kubeadm.go:163] "https://control-plane.minikube.internal:51259" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 17:01:39.535666    9508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 17:01:39.538327    9508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/scheduler.conf
	I0729 17:01:39.541345    9508 kubeadm.go:163] "https://control-plane.minikube.internal:51259" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 17:01:39.541367    9508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 17:01:39.544089    9508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 17:01:39.546819    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 17:01:39.568342    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 17:01:39.941498    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 17:01:40.052178    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 17:01:40.072504    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 17:01:40.092776    9508 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:01:40.092862    9508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:01:40.595018    9508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:01:41.094948    9508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:01:41.104328    9508 api_server.go:72] duration metric: took 1.011551458s to wait for apiserver process to appear ...
	I0729 17:01:41.104344    9508 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:01:41.104354    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:01:46.106615    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:01:46.106696    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:01:51.107669    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:01:51.107688    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:01:56.108186    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:01:56.108211    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:01.109203    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:01.109291    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:06.110896    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:06.111007    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:11.112924    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:11.112993    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:16.115287    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:16.115310    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:21.117509    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:21.117553    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:26.118785    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:26.118832    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:31.121059    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:31.121089    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:36.123285    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:36.123313    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:41.125559    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:41.125700    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:02:41.141144    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:02:41.141238    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:02:41.152282    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:02:41.152356    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:02:41.163332    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:02:41.163399    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:02:41.175693    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:02:41.175783    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:02:41.186097    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:02:41.186179    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:02:41.197241    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:02:41.197317    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:02:41.208241    9508 logs.go:276] 0 containers: []
	W0729 17:02:41.208252    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:02:41.208315    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:02:41.222998    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:02:41.223017    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:02:41.223022    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:02:41.234814    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:02:41.234825    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:02:41.246656    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:02:41.246668    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:02:41.264164    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:02:41.264177    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:02:41.275972    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:02:41.275985    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:02:41.287432    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:02:41.287443    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:02:41.313250    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:02:41.313259    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:02:41.325024    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:02:41.325036    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:02:41.353312    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:02:41.353322    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:02:41.367302    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:02:41.367312    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:02:41.385515    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:02:41.385525    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:02:41.390009    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:02:41.390015    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:02:41.411766    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:02:41.411776    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:02:41.512473    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:02:41.512484    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:02:41.526406    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:02:41.526416    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:02:41.537770    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:02:41.537782    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:02:41.553303    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:02:41.553315    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:02:44.094845    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:49.097207    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:49.097395    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:02:49.121654    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:02:49.121784    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:02:49.138361    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:02:49.138452    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:02:49.150726    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:02:49.150798    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:02:49.162021    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:02:49.162091    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:02:49.175447    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:02:49.175519    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:02:49.185716    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:02:49.185783    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:02:49.196986    9508 logs.go:276] 0 containers: []
	W0729 17:02:49.196999    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:02:49.197065    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:02:49.211782    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:02:49.211801    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:02:49.211806    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:02:49.231505    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:02:49.231520    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:02:49.248492    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:02:49.248503    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:02:49.273665    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:02:49.273678    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:02:49.278056    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:02:49.278062    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:02:49.313352    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:02:49.313366    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:02:49.327850    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:02:49.327859    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:02:49.356550    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:02:49.356562    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:02:49.368166    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:02:49.368180    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:02:49.380466    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:02:49.380484    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:02:49.394658    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:02:49.394671    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:02:49.409780    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:02:49.409791    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:02:49.424052    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:02:49.424064    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:02:49.439990    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:02:49.440000    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:02:49.450770    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:02:49.450783    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:02:49.489229    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:02:49.489238    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:02:49.500468    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:02:49.500482    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:02:52.014018    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:02:57.016463    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:02:57.016711    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:02:57.035904    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:02:57.036005    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:02:57.050516    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:02:57.050590    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:02:57.062506    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:02:57.062578    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:02:57.074353    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:02:57.074428    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:02:57.084594    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:02:57.084662    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:02:57.095338    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:02:57.095407    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:02:57.105352    9508 logs.go:276] 0 containers: []
	W0729 17:02:57.105363    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:02:57.105420    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:02:57.115712    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:02:57.115730    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:02:57.115735    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:02:57.127777    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:02:57.127788    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:02:57.152020    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:02:57.152034    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:02:57.165866    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:02:57.165877    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:02:57.179945    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:02:57.179959    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:02:57.191171    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:02:57.191183    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:02:57.216271    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:02:57.216281    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:02:57.227811    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:02:57.227825    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:02:57.238548    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:02:57.238558    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:02:57.252115    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:02:57.252126    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:02:57.263562    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:02:57.263576    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:02:57.277752    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:02:57.277765    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:02:57.289390    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:02:57.289401    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:02:57.303021    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:02:57.303032    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:02:57.321277    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:02:57.321291    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:02:57.358758    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:02:57.358769    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:02:57.362524    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:02:57.362533    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:02:59.901005    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:04.903259    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:04.903371    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:03:04.914599    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:03:04.914677    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:03:04.925271    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:03:04.925347    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:03:04.950266    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:03:04.950341    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:03:04.960882    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:03:04.960952    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:03:04.975532    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:03:04.975606    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:03:04.986049    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:03:04.986118    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:03:04.996288    9508 logs.go:276] 0 containers: []
	W0729 17:03:04.996298    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:03:04.996354    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:03:05.006562    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:03:05.006578    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:03:05.006586    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:03:05.030397    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:03:05.030406    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:03:05.041900    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:03:05.041911    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:03:05.066581    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:03:05.066589    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:03:05.078027    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:03:05.078035    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:03:05.095382    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:03:05.095393    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:03:05.134041    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:03:05.134050    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:03:05.138120    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:03:05.138128    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:03:05.162729    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:03:05.162740    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:03:05.177259    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:03:05.177270    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:03:05.188999    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:03:05.189011    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:03:05.203886    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:03:05.203897    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:03:05.215318    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:03:05.215329    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:03:05.248833    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:03:05.248846    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:03:05.262687    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:03:05.262698    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:03:05.276738    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:03:05.276748    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:03:05.288232    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:03:05.288242    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:03:07.801871    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:12.804518    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:12.804862    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:03:12.831114    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:03:12.831214    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:03:12.848691    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:03:12.848777    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:03:12.863587    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:03:12.863659    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:03:12.876018    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:03:12.876086    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:03:12.886877    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:03:12.886941    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:03:12.897523    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:03:12.897593    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:03:12.912440    9508 logs.go:276] 0 containers: []
	W0729 17:03:12.912451    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:03:12.912505    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:03:12.922894    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:03:12.922910    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:03:12.922916    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:03:12.942609    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:03:12.942621    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:03:12.962802    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:03:12.962811    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:03:12.976682    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:03:12.976692    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:03:12.992871    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:03:12.992882    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:03:13.008941    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:03:13.008955    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:03:13.020427    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:03:13.020440    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:03:13.032493    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:03:13.032503    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:03:13.056305    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:03:13.056312    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:03:13.077682    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:03:13.077695    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:03:13.089762    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:03:13.089772    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:03:13.126443    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:03:13.126451    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:03:13.163635    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:03:13.163646    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:03:13.181861    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:03:13.181870    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:03:13.199877    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:03:13.199887    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:03:13.203811    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:03:13.203817    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:03:13.228677    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:03:13.228691    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:03:15.745409    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:20.747805    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:20.747945    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:03:20.765369    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:03:20.765454    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:03:20.778265    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:03:20.778345    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:03:20.788914    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:03:20.788978    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:03:20.799478    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:03:20.799551    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:03:20.809201    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:03:20.809269    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:03:20.819487    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:03:20.819556    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:03:20.829863    9508 logs.go:276] 0 containers: []
	W0729 17:03:20.829873    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:03:20.829935    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:03:20.840039    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:03:20.840057    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:03:20.840062    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:03:20.857983    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:03:20.857995    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:03:20.869815    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:03:20.869828    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:03:20.883952    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:03:20.883966    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:03:20.895450    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:03:20.895460    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:03:20.931613    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:03:20.931628    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:03:20.946199    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:03:20.946216    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:03:20.982750    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:03:20.982758    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:03:21.002323    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:03:21.002336    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:03:21.027622    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:03:21.027633    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:03:21.039924    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:03:21.039938    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:03:21.051337    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:03:21.051351    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:03:21.064480    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:03:21.064497    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:03:21.069039    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:03:21.069045    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:03:21.080624    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:03:21.080636    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:03:21.098560    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:03:21.098574    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:03:21.124830    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:03:21.124839    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:03:23.642162    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:28.644476    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:28.644624    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:03:28.663901    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:03:28.663982    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:03:28.675056    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:03:28.675128    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:03:28.685300    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:03:28.685362    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:03:28.695876    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:03:28.695946    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:03:28.706221    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:03:28.706287    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:03:28.718824    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:03:28.718898    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:03:28.729050    9508 logs.go:276] 0 containers: []
	W0729 17:03:28.729063    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:03:28.729127    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:03:28.740135    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:03:28.740155    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:03:28.740161    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:03:28.755259    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:03:28.755270    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:03:28.767022    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:03:28.767038    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:03:28.778480    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:03:28.778491    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:03:28.782483    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:03:28.782490    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:03:28.796096    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:03:28.796106    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:03:28.808231    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:03:28.808241    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:03:28.833837    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:03:28.833846    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:03:28.853929    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:03:28.853938    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:03:28.867872    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:03:28.867881    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:03:28.879342    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:03:28.879356    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:03:28.896433    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:03:28.896443    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:03:28.920662    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:03:28.920671    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:03:28.931857    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:03:28.931872    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:03:28.944889    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:03:28.944900    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:03:28.985374    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:03:28.985385    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:03:29.022462    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:03:29.022472    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:03:31.538501    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:36.539142    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:36.539348    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:03:36.557014    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:03:36.557102    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:03:36.570807    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:03:36.570882    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:03:36.582189    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:03:36.582263    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:03:36.592232    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:03:36.592299    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:03:36.607513    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:03:36.607579    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:03:36.619088    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:03:36.619159    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:03:36.630510    9508 logs.go:276] 0 containers: []
	W0729 17:03:36.630523    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:03:36.630586    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:03:36.642663    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:03:36.642681    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:03:36.642686    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:03:36.656981    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:03:36.656991    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:03:36.668638    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:03:36.668650    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:03:36.680394    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:03:36.680405    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:03:36.697953    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:03:36.697963    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:03:36.722546    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:03:36.722553    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:03:36.759906    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:03:36.759916    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:03:36.774030    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:03:36.774046    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:03:36.799366    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:03:36.799376    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:03:36.817463    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:03:36.817476    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:03:36.829406    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:03:36.829417    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:03:36.864426    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:03:36.864436    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:03:36.875621    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:03:36.875631    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:03:36.886865    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:03:36.886877    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:03:36.891057    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:03:36.891066    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:03:36.909137    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:03:36.909150    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:03:36.924462    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:03:36.924475    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:03:39.441485    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:44.443839    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:44.443979    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:03:44.459893    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:03:44.459974    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:03:44.470786    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:03:44.470853    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:03:44.481163    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:03:44.481234    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:03:44.491376    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:03:44.491445    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:03:44.502171    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:03:44.502242    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:03:44.512939    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:03:44.513005    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:03:44.530219    9508 logs.go:276] 0 containers: []
	W0729 17:03:44.530232    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:03:44.530296    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:03:44.540627    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:03:44.540650    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:03:44.540655    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:03:44.555116    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:03:44.555127    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:03:44.566723    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:03:44.566736    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:03:44.584664    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:03:44.584674    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:03:44.597177    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:03:44.597188    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:03:44.634197    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:03:44.634205    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:03:44.638068    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:03:44.638076    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:03:44.663068    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:03:44.663079    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:03:44.674417    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:03:44.674428    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:03:44.686329    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:03:44.686338    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:03:44.700894    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:03:44.700906    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:03:44.715507    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:03:44.715519    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:03:44.726845    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:03:44.726857    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:03:44.749818    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:03:44.749825    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:03:44.788513    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:03:44.788526    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:03:44.803040    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:03:44.803055    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:03:44.815015    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:03:44.815028    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:03:47.330370    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:03:52.332218    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:03:52.332488    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:03:52.357858    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:03:52.357983    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:03:52.376769    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:03:52.376850    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:03:52.389613    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:03:52.389683    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:03:52.401711    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:03:52.401776    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:03:52.412642    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:03:52.412704    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:03:52.423450    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:03:52.423515    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:03:52.433963    9508 logs.go:276] 0 containers: []
	W0729 17:03:52.433975    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:03:52.434036    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:03:52.444348    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:03:52.444365    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:03:52.444371    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:03:52.462963    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:03:52.462977    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:03:52.478510    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:03:52.478521    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:03:52.482490    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:03:52.482496    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:03:52.494096    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:03:52.494108    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:03:52.507412    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:03:52.507421    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:03:52.542760    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:03:52.542775    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:03:52.557407    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:03:52.557420    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:03:52.568315    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:03:52.568330    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:03:52.583367    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:03:52.583378    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:03:52.595149    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:03:52.595161    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:03:52.606772    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:03:52.606788    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:03:52.630426    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:03:52.630435    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:03:52.643771    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:03:52.643779    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:03:52.662300    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:03:52.662309    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:03:52.679461    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:03:52.679471    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:03:52.716718    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:03:52.716725    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:03:55.244157    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:00.246446    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:00.246649    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:00.273341    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:04:00.273462    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:00.290863    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:04:00.290940    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:00.303558    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:04:00.303633    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:00.315124    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:04:00.315189    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:00.325699    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:04:00.325770    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:00.337301    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:04:00.337365    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:00.347558    9508 logs.go:276] 0 containers: []
	W0729 17:04:00.347570    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:00.347624    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:00.358676    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:04:00.358694    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:04:00.358700    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:00.371899    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:04:00.371909    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:04:00.399011    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:04:00.399022    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:04:00.415598    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:04:00.415611    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:04:00.430120    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:04:00.430131    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:04:00.444414    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:04:00.444429    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:04:00.470623    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:04:00.470633    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:04:00.482508    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:04:00.482518    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:04:00.493369    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:00.493383    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:00.497459    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:00.497465    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:00.536599    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:04:00.536613    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:04:00.552161    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:04:00.552174    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:04:00.563727    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:00.563740    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:00.589025    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:00.589034    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:00.628396    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:04:00.628415    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:04:00.654498    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:04:00.654509    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:04:00.671923    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:04:00.671934    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:04:03.187781    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:08.190486    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:08.190872    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:08.231276    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:04:08.231415    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:08.258822    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:04:08.258920    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:08.272868    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:04:08.272945    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:08.288336    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:04:08.288410    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:08.298951    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:04:08.299026    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:08.309410    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:04:08.309486    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:08.320258    9508 logs.go:276] 0 containers: []
	W0729 17:04:08.320270    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:08.320338    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:08.331144    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:04:08.331161    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:08.331169    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:08.369801    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:04:08.369815    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:04:08.395897    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:04:08.395909    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:04:08.412040    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:08.412052    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:08.436668    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:08.436677    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:08.478587    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:04:08.478599    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:04:08.490112    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:04:08.490125    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:04:08.501784    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:04:08.501794    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:04:08.515426    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:04:08.515438    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:04:08.527233    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:04:08.527243    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:04:08.544372    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:04:08.544386    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:04:08.563247    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:04:08.563260    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:04:08.574866    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:04:08.574879    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:08.588182    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:08.588193    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:08.592772    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:04:08.592780    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:04:08.604529    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:04:08.604541    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:04:08.618156    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:04:08.618171    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:04:11.134575    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:16.135403    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:16.135628    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:16.161545    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:04:16.161654    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:16.181577    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:04:16.181662    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:16.194693    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:04:16.194775    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:16.206252    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:04:16.206328    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:16.216211    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:04:16.216283    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:16.229652    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:04:16.229722    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:16.239817    9508 logs.go:276] 0 containers: []
	W0729 17:04:16.239829    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:16.239886    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:16.253643    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:04:16.253663    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:04:16.253670    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:04:16.267907    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:04:16.267918    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:04:16.283105    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:04:16.283119    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:04:16.297068    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:04:16.297081    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:04:16.308575    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:16.308586    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:16.344437    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:04:16.344446    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:04:16.355947    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:04:16.355957    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:04:16.367998    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:04:16.368008    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:04:16.379411    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:16.379420    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:16.403124    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:16.403135    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:16.440945    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:04:16.440959    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:04:16.457807    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:04:16.457819    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:04:16.470086    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:04:16.470099    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:04:16.491611    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:16.491622    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:16.496455    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:04:16.496464    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:04:16.521488    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:04:16.521502    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:04:16.539615    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:04:16.539626    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:19.053848    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:24.055017    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:24.055179    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:24.075026    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:04:24.075123    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:24.089785    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:04:24.089863    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:24.101877    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:04:24.101951    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:24.112373    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:04:24.112448    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:24.127207    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:04:24.127278    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:24.141690    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:04:24.141765    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:24.151621    9508 logs.go:276] 0 containers: []
	W0729 17:04:24.151636    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:24.151695    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:24.162249    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:04:24.162266    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:04:24.162274    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:04:24.176429    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:04:24.176440    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:04:24.194328    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:24.194345    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:24.233589    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:24.233597    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:24.267777    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:04:24.267787    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:04:24.293518    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:04:24.293530    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:04:24.305499    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:24.305510    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:24.309394    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:04:24.309402    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:04:24.324023    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:04:24.324034    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:04:24.335573    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:04:24.335586    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:04:24.349064    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:24.349075    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:24.373360    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:04:24.373369    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:24.386507    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:04:24.386519    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:04:24.400892    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:04:24.400903    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:04:24.416932    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:04:24.416946    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:04:24.428286    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:04:24.428298    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:04:24.440056    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:04:24.440065    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:04:26.954470    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:31.957075    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:31.957260    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:31.970980    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:04:31.971061    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:31.982877    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:04:31.982955    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:31.992856    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:04:31.992930    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:32.004169    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:04:32.004240    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:32.015344    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:04:32.015413    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:32.025923    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:04:32.025986    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:32.035853    9508 logs.go:276] 0 containers: []
	W0729 17:04:32.035865    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:32.035924    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:32.050195    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:04:32.050211    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:04:32.050217    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:04:32.064111    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:04:32.064124    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:04:32.079688    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:04:32.079701    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:04:32.093833    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:32.093847    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:32.136701    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:04:32.136715    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:04:32.151770    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:04:32.151782    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:04:32.163220    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:04:32.163234    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:04:32.184381    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:04:32.184393    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:04:32.196337    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:32.196347    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:32.219529    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:04:32.219537    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:04:32.231128    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:04:32.231141    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:04:32.242890    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:04:32.242903    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:04:32.255418    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:32.255428    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:32.293009    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:32.293019    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:32.297533    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:04:32.297539    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:04:32.322455    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:04:32.322469    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:04:32.336518    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:04:32.336528    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:34.850046    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:39.852370    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:39.852526    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:39.866185    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:04:39.866265    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:39.877510    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:04:39.877579    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:39.887927    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:04:39.888003    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:39.901352    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:04:39.901420    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:39.911659    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:04:39.911731    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:39.922385    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:04:39.922452    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:39.933121    9508 logs.go:276] 0 containers: []
	W0729 17:04:39.933132    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:39.933190    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:39.943491    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:04:39.943511    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:39.943540    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:39.982713    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:39.982723    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:40.018222    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:04:40.018233    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:04:40.030079    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:40.030091    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:40.052428    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:04:40.052436    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:04:40.066577    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:04:40.066590    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:04:40.081041    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:04:40.081054    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:04:40.094360    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:04:40.094370    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:04:40.105304    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:04:40.105316    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:04:40.119121    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:04:40.119131    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:04:40.134360    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:04:40.134370    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:04:40.145711    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:40.145722    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:40.149907    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:04:40.149916    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:04:40.175515    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:04:40.175526    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:04:40.186506    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:04:40.186520    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:04:40.198237    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:04:40.198248    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:04:40.215416    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:04:40.215426    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:42.729462    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:47.731771    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:47.731950    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:47.749201    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:04:47.749291    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:47.766384    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:04:47.766458    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:47.783127    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:04:47.783201    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:47.794253    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:04:47.794325    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:47.808685    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:04:47.808761    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:47.819018    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:04:47.819081    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:47.829328    9508 logs.go:276] 0 containers: []
	W0729 17:04:47.829341    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:47.829403    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:47.839660    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:04:47.839681    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:04:47.839686    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:04:47.851585    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:47.851598    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:47.886371    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:04:47.886385    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:04:47.915351    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:04:47.915361    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:04:47.930439    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:04:47.930452    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:04:47.944350    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:04:47.944364    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:47.955768    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:47.955781    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:47.959867    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:04:47.959876    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:04:47.974077    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:04:47.974086    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:04:47.988756    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:04:47.988768    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:04:48.005908    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:04:48.005918    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:04:48.021773    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:04:48.021786    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:04:48.037413    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:48.037426    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:48.059925    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:48.059933    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:48.096111    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:04:48.096118    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:04:48.110125    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:04:48.110140    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:04:48.121272    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:04:48.121283    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:04:50.637189    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:04:55.639522    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:04:55.639671    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:04:55.657965    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:04:55.658046    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:04:55.679610    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:04:55.679673    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:04:55.691239    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:04:55.691321    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:04:55.702817    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:04:55.702892    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:04:55.717810    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:04:55.717878    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:04:55.727878    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:04:55.727953    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:04:55.738801    9508 logs.go:276] 0 containers: []
	W0729 17:04:55.738817    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:04:55.738877    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:04:55.754352    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:04:55.754371    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:04:55.754376    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:04:55.759149    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:04:55.759155    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:04:55.793202    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:04:55.793211    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:04:55.806941    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:04:55.806952    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:04:55.831639    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:04:55.831648    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:04:55.849015    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:04:55.849024    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:04:55.862300    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:04:55.862310    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:04:55.902354    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:04:55.902365    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:04:55.918097    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:04:55.918109    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:04:55.933630    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:04:55.933639    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:04:55.951036    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:04:55.951046    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:04:55.975474    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:04:55.975482    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:04:55.989751    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:04:55.989762    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:04:56.001103    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:04:56.001115    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:04:56.014688    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:04:56.014699    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:04:56.030523    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:04:56.030535    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:04:56.045099    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:04:56.045109    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:04:58.558768    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:03.561199    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:03.561352    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:05:03.580793    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:05:03.580890    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:05:03.596929    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:05:03.597006    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:05:03.607716    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:05:03.607786    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:05:03.618218    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:05:03.618290    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:05:03.632828    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:05:03.632890    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:05:03.643695    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:05:03.643760    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:05:03.656896    9508 logs.go:276] 0 containers: []
	W0729 17:05:03.656907    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:05:03.656965    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:05:03.667365    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:05:03.667383    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:05:03.667388    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:05:03.681885    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:05:03.681895    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:05:03.707044    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:05:03.707053    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:05:03.721390    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:05:03.721401    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:05:03.732455    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:05:03.732467    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:05:03.769024    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:05:03.769032    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:05:03.773019    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:05:03.773027    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:05:03.835590    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:05:03.835603    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:05:03.854205    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:05:03.854216    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:05:03.866183    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:05:03.866196    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:05:03.883193    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:05:03.883207    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:05:03.907129    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:05:03.907137    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:05:03.919361    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:05:03.919374    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:05:03.931266    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:05:03.931279    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:05:03.950355    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:05:03.950366    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:05:03.967718    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:05:03.967730    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:05:03.981111    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:05:03.981126    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:05:06.494347    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:11.496753    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:11.496960    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:05:11.513904    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:05:11.513996    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:05:11.527131    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:05:11.527206    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:05:11.538113    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:05:11.538180    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:05:11.548532    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:05:11.548601    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:05:11.567091    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:05:11.567167    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:05:11.582090    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:05:11.582157    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:05:11.592341    9508 logs.go:276] 0 containers: []
	W0729 17:05:11.592354    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:05:11.592410    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:05:11.603149    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:05:11.603166    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:05:11.603172    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:05:11.617418    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:05:11.617429    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:05:11.629768    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:05:11.629778    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:05:11.642013    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:05:11.642025    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:05:11.654383    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:05:11.654393    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:05:11.669556    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:05:11.669567    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:05:11.683549    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:05:11.683561    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:05:11.695903    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:05:11.695916    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:05:11.717670    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:05:11.717678    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:05:11.753678    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:05:11.753694    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:05:11.778844    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:05:11.778855    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:05:11.796454    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:05:11.796463    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:05:11.810291    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:05:11.810304    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:05:11.822332    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:05:11.822346    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:05:11.860832    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:05:11.860844    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:05:11.865048    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:05:11.865057    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:05:11.876922    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:05:11.876934    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:05:14.393277    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:19.395675    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:19.395910    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:05:19.419351    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:05:19.419455    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:05:19.436942    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:05:19.437022    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:05:19.449111    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:05:19.449189    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:05:19.460487    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:05:19.460563    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:05:19.471027    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:05:19.471088    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:05:19.482226    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:05:19.482289    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:05:19.492685    9508 logs.go:276] 0 containers: []
	W0729 17:05:19.492698    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:05:19.492756    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:05:19.504279    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:05:19.504296    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:05:19.504302    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:05:19.518606    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:05:19.518618    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:05:19.532801    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:05:19.532813    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:05:19.547775    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:05:19.547789    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:05:19.560429    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:05:19.560439    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:05:19.578009    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:05:19.578018    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:05:19.589445    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:05:19.589457    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:05:19.604675    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:05:19.604689    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:05:19.630288    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:05:19.630300    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:05:19.654095    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:05:19.654105    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:05:19.668269    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:05:19.668279    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:05:19.703062    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:05:19.703074    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:05:19.718693    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:05:19.718704    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:05:19.730880    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:05:19.730895    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:05:19.735046    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:05:19.735057    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:05:19.746446    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:05:19.746461    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:05:19.758301    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:05:19.758315    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:05:22.296580    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:27.298950    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:27.299173    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:05:27.326388    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:05:27.326475    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:05:27.340438    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:05:27.340513    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:05:27.351882    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:05:27.351955    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:05:27.362306    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:05:27.362378    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:05:27.372759    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:05:27.372820    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:05:27.383126    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:05:27.383197    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:05:27.393599    9508 logs.go:276] 0 containers: []
	W0729 17:05:27.393609    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:05:27.393665    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:05:27.405538    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:05:27.405560    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:05:27.405567    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:05:27.442984    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:05:27.442992    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:05:27.458462    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:05:27.458476    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:05:27.475943    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:05:27.475956    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:05:27.487649    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:05:27.487661    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:05:27.498721    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:05:27.498731    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:05:27.503494    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:05:27.503501    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:05:27.537721    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:05:27.537736    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:05:27.552501    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:05:27.552515    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:05:27.566470    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:05:27.566483    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:05:27.577676    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:05:27.577686    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:05:27.589236    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:05:27.589245    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:05:27.603430    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:05:27.603441    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:05:27.617955    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:05:27.617966    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:05:27.630406    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:05:27.630420    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:05:27.655408    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:05:27.655418    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:05:27.670727    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:05:27.670737    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:05:30.196733    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:35.199025    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:35.199173    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:05:35.211543    9508 logs.go:276] 2 containers: [c473df7788e5 34c6ea0e3a5b]
	I0729 17:05:35.211624    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:05:35.222627    9508 logs.go:276] 2 containers: [6aeb3880051e 1f5e563256bb]
	I0729 17:05:35.222700    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:05:35.233302    9508 logs.go:276] 1 containers: [9e6fa44b574b]
	I0729 17:05:35.233364    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:05:35.243537    9508 logs.go:276] 2 containers: [fcdce0d17cd8 b4051e54596c]
	I0729 17:05:35.243612    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:05:35.253873    9508 logs.go:276] 1 containers: [92f16c346654]
	I0729 17:05:35.253950    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:05:35.264185    9508 logs.go:276] 2 containers: [5b777281d588 a3e5c6623186]
	I0729 17:05:35.264249    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:05:35.274009    9508 logs.go:276] 0 containers: []
	W0729 17:05:35.274020    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:05:35.274077    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:05:35.284054    9508 logs.go:276] 2 containers: [ec7e32dbc6da fbc6f6b59034]
	I0729 17:05:35.284072    9508 logs.go:123] Gathering logs for kube-apiserver [c473df7788e5] ...
	I0729 17:05:35.284078    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c473df7788e5"
	I0729 17:05:35.297861    9508 logs.go:123] Gathering logs for etcd [6aeb3880051e] ...
	I0729 17:05:35.297871    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aeb3880051e"
	I0729 17:05:35.311217    9508 logs.go:123] Gathering logs for storage-provisioner [fbc6f6b59034] ...
	I0729 17:05:35.311227    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6f6b59034"
	I0729 17:05:35.322006    9508 logs.go:123] Gathering logs for kube-apiserver [34c6ea0e3a5b] ...
	I0729 17:05:35.322016    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c6ea0e3a5b"
	I0729 17:05:35.346653    9508 logs.go:123] Gathering logs for coredns [9e6fa44b574b] ...
	I0729 17:05:35.346663    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6fa44b574b"
	I0729 17:05:35.358053    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:05:35.358063    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:05:35.370822    9508 logs.go:123] Gathering logs for storage-provisioner [ec7e32dbc6da] ...
	I0729 17:05:35.370833    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec7e32dbc6da"
	I0729 17:05:35.382638    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:05:35.382648    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:05:35.420959    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:05:35.420973    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:05:35.457610    9508 logs.go:123] Gathering logs for kube-scheduler [fcdce0d17cd8] ...
	I0729 17:05:35.457628    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdce0d17cd8"
	I0729 17:05:35.473664    9508 logs.go:123] Gathering logs for kube-scheduler [b4051e54596c] ...
	I0729 17:05:35.473674    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4051e54596c"
	I0729 17:05:35.488255    9508 logs.go:123] Gathering logs for kube-proxy [92f16c346654] ...
	I0729 17:05:35.488266    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f16c346654"
	I0729 17:05:35.499924    9508 logs.go:123] Gathering logs for kube-controller-manager [a3e5c6623186] ...
	I0729 17:05:35.499935    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5c6623186"
	I0729 17:05:35.514361    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:05:35.514371    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:05:35.518935    9508 logs.go:123] Gathering logs for etcd [1f5e563256bb] ...
	I0729 17:05:35.518941    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f5e563256bb"
	I0729 17:05:35.533599    9508 logs.go:123] Gathering logs for kube-controller-manager [5b777281d588] ...
	I0729 17:05:35.533612    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b777281d588"
	I0729 17:05:35.551941    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:05:35.551954    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:05:38.077201    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:43.079468    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:43.079551    9508 kubeadm.go:597] duration metric: took 4m3.595147458s to restartPrimaryControlPlane
	W0729 17:05:43.079615    9508 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 17:05:43.079641    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 17:05:44.171251    9508 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0916s)
	I0729 17:05:44.171319    9508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:05:44.176426    9508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 17:05:44.179447    9508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 17:05:44.182514    9508 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 17:05:44.182522    9508 kubeadm.go:157] found existing configuration files:
	
	I0729 17:05:44.182556    9508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/admin.conf
	I0729 17:05:44.185359    9508 kubeadm.go:163] "https://control-plane.minikube.internal:51259" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 17:05:44.185392    9508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 17:05:44.188106    9508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/kubelet.conf
	I0729 17:05:44.190956    9508 kubeadm.go:163] "https://control-plane.minikube.internal:51259" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 17:05:44.190989    9508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 17:05:44.194906    9508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/controller-manager.conf
	I0729 17:05:44.198125    9508 kubeadm.go:163] "https://control-plane.minikube.internal:51259" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 17:05:44.198153    9508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 17:05:44.201202    9508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/scheduler.conf
	I0729 17:05:44.203619    9508 kubeadm.go:163] "https://control-plane.minikube.internal:51259" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51259 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 17:05:44.203643    9508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 17:05:44.206421    9508 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 17:05:44.223174    9508 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 17:05:44.223247    9508 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 17:05:44.272896    9508 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 17:05:44.272961    9508 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 17:05:44.273016    9508 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 17:05:44.321693    9508 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 17:05:44.326943    9508 out.go:204]   - Generating certificates and keys ...
	I0729 17:05:44.326982    9508 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 17:05:44.327028    9508 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 17:05:44.327074    9508 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 17:05:44.327105    9508 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 17:05:44.327140    9508 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 17:05:44.327172    9508 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 17:05:44.327205    9508 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 17:05:44.327237    9508 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 17:05:44.327285    9508 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 17:05:44.327323    9508 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 17:05:44.327349    9508 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 17:05:44.327381    9508 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 17:05:44.399485    9508 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 17:05:44.518800    9508 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 17:05:44.715276    9508 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 17:05:44.896364    9508 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 17:05:44.928979    9508 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 17:05:44.929278    9508 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 17:05:44.929321    9508 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 17:05:44.998660    9508 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 17:05:45.002850    9508 out.go:204]   - Booting up control plane ...
	I0729 17:05:45.002935    9508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 17:05:45.002977    9508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 17:05:45.003021    9508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 17:05:45.003113    9508 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 17:05:45.003214    9508 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 17:05:49.004256    9508 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001699 seconds
	I0729 17:05:49.004318    9508 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 17:05:49.007721    9508 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 17:05:49.517774    9508 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 17:05:49.517961    9508 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-208000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 17:05:50.021204    9508 kubeadm.go:310] [bootstrap-token] Using token: 2y5tc8.9ynqoroehohyamch
	I0729 17:05:50.024951    9508 out.go:204]   - Configuring RBAC rules ...
	I0729 17:05:50.025013    9508 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 17:05:50.025054    9508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 17:05:50.028872    9508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 17:05:50.029875    9508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 17:05:50.030936    9508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 17:05:50.032109    9508 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 17:05:50.035220    9508 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 17:05:50.201708    9508 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 17:05:50.427247    9508 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 17:05:50.427933    9508 kubeadm.go:310] 
	I0729 17:05:50.428009    9508 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 17:05:50.428030    9508 kubeadm.go:310] 
	I0729 17:05:50.428086    9508 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 17:05:50.428096    9508 kubeadm.go:310] 
	I0729 17:05:50.428118    9508 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 17:05:50.428178    9508 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 17:05:50.428212    9508 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 17:05:50.428215    9508 kubeadm.go:310] 
	I0729 17:05:50.428245    9508 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 17:05:50.428248    9508 kubeadm.go:310] 
	I0729 17:05:50.428270    9508 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 17:05:50.428272    9508 kubeadm.go:310] 
	I0729 17:05:50.428300    9508 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 17:05:50.428383    9508 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 17:05:50.428525    9508 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 17:05:50.428540    9508 kubeadm.go:310] 
	I0729 17:05:50.428662    9508 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 17:05:50.428705    9508 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 17:05:50.428711    9508 kubeadm.go:310] 
	I0729 17:05:50.428752    9508 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2y5tc8.9ynqoroehohyamch \
	I0729 17:05:50.428946    9508 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0590c93eea840245319f62698163347e7b5c66f98e4c9e27c4a0315b2e5764a4 \
	I0729 17:05:50.428977    9508 kubeadm.go:310] 	--control-plane 
	I0729 17:05:50.428981    9508 kubeadm.go:310] 
	I0729 17:05:50.429053    9508 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 17:05:50.429057    9508 kubeadm.go:310] 
	I0729 17:05:50.429109    9508 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2y5tc8.9ynqoroehohyamch \
	I0729 17:05:50.429232    9508 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0590c93eea840245319f62698163347e7b5c66f98e4c9e27c4a0315b2e5764a4 
	I0729 17:05:50.429307    9508 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 17:05:50.429330    9508 cni.go:84] Creating CNI manager for ""
	I0729 17:05:50.429337    9508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:05:50.435684    9508 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 17:05:50.443027    9508 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 17:05:50.447444    9508 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 17:05:50.452385    9508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 17:05:50.452487    9508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:05:50.452490    9508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-208000 minikube.k8s.io/updated_at=2024_07_29T17_05_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500 minikube.k8s.io/name=stopped-upgrade-208000 minikube.k8s.io/primary=true
	I0729 17:05:50.501800    9508 ops.go:34] apiserver oom_adj: -16
	I0729 17:05:50.501816    9508 kubeadm.go:1113] duration metric: took 49.414ms to wait for elevateKubeSystemPrivileges
	I0729 17:05:50.501832    9508 kubeadm.go:394] duration metric: took 4m11.030389875s to StartCluster
	I0729 17:05:50.501841    9508 settings.go:142] acquiring lock: {Name:mke03e8e29c1ffe5c4cd19f776f54e7d6bc684a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:05:50.502001    9508 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:05:50.502403    9508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/kubeconfig: {Name:mk580a93ad62a9c0663fd1e6ef1bfe6feb6bde87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:05:50.502614    9508 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:05:50.502682    9508 config.go:182] Loaded profile config "stopped-upgrade-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 17:05:50.502642    9508 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 17:05:50.502730    9508 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-208000"
	I0729 17:05:50.502745    9508 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-208000"
	W0729 17:05:50.502749    9508 addons.go:243] addon storage-provisioner should already be in state true
	I0729 17:05:50.502751    9508 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-208000"
	I0729 17:05:50.502760    9508 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-208000"
	I0729 17:05:50.502762    9508 host.go:66] Checking if "stopped-upgrade-208000" exists ...
	I0729 17:05:50.503921    9508 kapi.go:59] client config for stopped-upgrade-208000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/stopped-upgrade-208000/client.key", CAFile:"/Users/jenkins/minikube-integration/19346-7076/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103f501b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 17:05:50.504041    9508 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-208000"
	W0729 17:05:50.504045    9508 addons.go:243] addon default-storageclass should already be in state true
	I0729 17:05:50.504054    9508 host.go:66] Checking if "stopped-upgrade-208000" exists ...
	I0729 17:05:50.506668    9508 out.go:177] * Verifying Kubernetes components...
	I0729 17:05:50.507332    9508 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 17:05:50.510826    9508 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 17:05:50.510836    9508 sshutil.go:53] new ssh client: &{IP:localhost Port:51224 SSHKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/stopped-upgrade-208000/id_rsa Username:docker}
	I0729 17:05:50.513632    9508 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:05:50.517660    9508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:05:50.521653    9508 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:05:50.521663    9508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 17:05:50.521672    9508 sshutil.go:53] new ssh client: &{IP:localhost Port:51224 SSHKeyPath:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/stopped-upgrade-208000/id_rsa Username:docker}
	I0729 17:05:50.601888    9508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:05:50.607404    9508 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:05:50.607466    9508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:05:50.613333    9508 api_server.go:72] duration metric: took 110.705208ms to wait for apiserver process to appear ...
	I0729 17:05:50.613346    9508 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:05:50.613355    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:05:50.645254    9508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 17:05:50.666696    9508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:05:55.615456    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:05:55.615497    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:00.615830    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:00.615863    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:05.616251    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:05.616311    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:10.616886    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:10.616910    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:15.617513    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:15.617559    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:20.618368    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:20.618389    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 17:06:20.992368    9508 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 17:06:20.996432    9508 out.go:177] * Enabled addons: storage-provisioner
	I0729 17:06:21.005380    9508 addons.go:510] duration metric: took 30.502769708s for enable addons: enabled=[storage-provisioner]
	I0729 17:06:25.619402    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:25.619452    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:30.620789    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:30.620812    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:35.622401    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:35.622441    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:40.624488    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:40.624528    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:45.626790    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:45.626849    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:50.629212    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:50.629318    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:06:50.640038    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:06:50.640111    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:06:50.651078    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:06:50.651154    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:06:50.661301    9508 logs.go:276] 2 containers: [c418e990baba bb65121be772]
	I0729 17:06:50.661372    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:06:50.671644    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:06:50.671710    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:06:50.682791    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:06:50.682867    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:06:50.693512    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:06:50.693584    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:06:50.706126    9508 logs.go:276] 0 containers: []
	W0729 17:06:50.706141    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:06:50.706206    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:06:50.720945    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:06:50.720962    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:06:50.720969    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:06:50.725616    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:06:50.725626    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:06:50.759917    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:06:50.759929    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:06:50.774716    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:06:50.774735    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:06:50.791559    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:06:50.791569    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:06:50.816012    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:06:50.816020    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:06:50.850095    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:06:50.850115    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:06:50.872428    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:06:50.872438    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:06:50.887043    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:06:50.887055    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:06:50.899111    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:06:50.899126    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:06:50.911648    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:06:50.911659    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:06:50.930217    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:06:50.930228    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:06:50.942489    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:06:50.942502    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:06:53.455593    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:06:58.457986    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:06:58.458096    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:06:58.471355    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:06:58.471428    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:06:58.482758    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:06:58.482829    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:06:58.493253    9508 logs.go:276] 2 containers: [c418e990baba bb65121be772]
	I0729 17:06:58.493317    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:06:58.503850    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:06:58.503912    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:06:58.513800    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:06:58.513871    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:06:58.524343    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:06:58.524411    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:06:58.534941    9508 logs.go:276] 0 containers: []
	W0729 17:06:58.534952    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:06:58.535006    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:06:58.545047    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:06:58.545060    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:06:58.545066    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:06:58.558867    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:06:58.558878    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:06:58.570630    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:06:58.570640    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:06:58.588002    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:06:58.588013    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:06:58.601066    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:06:58.601076    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:06:58.626109    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:06:58.626119    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:06:58.638826    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:06:58.638836    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:06:58.643713    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:06:58.643721    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:06:58.680208    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:06:58.680220    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:06:58.692102    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:06:58.692113    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:06:58.707360    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:06:58.707370    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:06:58.719436    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:06:58.719448    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:06:58.755054    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:06:58.755065    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:07:01.271797    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:06.274072    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:06.274312    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:07:06.300547    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:07:06.300666    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:07:06.317840    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:07:06.317924    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:07:06.331242    9508 logs.go:276] 2 containers: [c418e990baba bb65121be772]
	I0729 17:07:06.331317    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:07:06.342783    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:07:06.342852    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:07:06.353545    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:07:06.353619    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:07:06.364830    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:07:06.364904    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:07:06.375147    9508 logs.go:276] 0 containers: []
	W0729 17:07:06.375161    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:07:06.375219    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:07:06.385856    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:07:06.385873    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:07:06.385878    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:07:06.418649    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:07:06.418658    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:07:06.452991    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:07:06.453003    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:07:06.470742    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:07:06.470752    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:07:06.482518    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:07:06.482529    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:07:06.494646    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:07:06.494656    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:07:06.516040    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:07:06.516050    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:07:06.527613    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:07:06.527623    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:07:06.538910    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:07:06.538920    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:07:06.543615    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:07:06.543622    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:07:06.561681    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:07:06.561691    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:07:06.575827    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:07:06.575837    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:07:06.601317    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:07:06.601327    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:07:09.115223    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:14.115520    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:14.115694    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:07:14.132098    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:07:14.132187    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:07:14.144154    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:07:14.144225    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:07:14.155209    9508 logs.go:276] 2 containers: [c418e990baba bb65121be772]
	I0729 17:07:14.155280    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:07:14.170150    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:07:14.170219    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:07:14.180402    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:07:14.180467    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:07:14.190856    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:07:14.190924    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:07:14.200862    9508 logs.go:276] 0 containers: []
	W0729 17:07:14.200874    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:07:14.200932    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:07:14.211057    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:07:14.211073    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:07:14.211080    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:07:14.235742    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:07:14.235752    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:07:14.271349    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:07:14.271360    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:07:14.276793    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:07:14.276800    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:07:14.295503    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:07:14.295514    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:07:14.309676    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:07:14.309688    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:07:14.321219    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:07:14.321229    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:07:14.337070    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:07:14.337080    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:07:14.349252    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:07:14.349265    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:07:14.388933    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:07:14.388943    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:07:14.401020    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:07:14.401031    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:07:14.413652    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:07:14.413664    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:07:14.432182    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:07:14.432192    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:07:16.945793    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:21.948040    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:21.948239    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:07:21.971740    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:07:21.971848    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:07:21.988221    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:07:21.988304    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:07:22.002781    9508 logs.go:276] 2 containers: [c418e990baba bb65121be772]
	I0729 17:07:22.002850    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:07:22.013694    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:07:22.013769    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:07:22.024636    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:07:22.024705    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:07:22.034968    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:07:22.035039    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:07:22.046146    9508 logs.go:276] 0 containers: []
	W0729 17:07:22.046158    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:07:22.046215    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:07:22.062022    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:07:22.062035    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:07:22.062042    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:07:22.075824    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:07:22.075836    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:07:22.091823    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:07:22.091833    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:07:22.103824    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:07:22.103837    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:07:22.123008    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:07:22.123019    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:07:22.135660    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:07:22.135672    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:07:22.147593    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:07:22.147605    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:07:22.162118    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:07:22.162130    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:07:22.166551    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:07:22.166558    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:07:22.200353    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:07:22.200364    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:07:22.212548    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:07:22.212564    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:07:22.224071    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:07:22.224083    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:07:22.249722    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:07:22.249736    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:07:24.786518    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:29.788919    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:29.789046    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:07:29.801167    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:07:29.801250    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:07:29.812041    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:07:29.812116    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:07:29.822799    9508 logs.go:276] 2 containers: [c418e990baba bb65121be772]
	I0729 17:07:29.822875    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:07:29.834083    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:07:29.834154    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:07:29.844300    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:07:29.844372    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:07:29.854823    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:07:29.854897    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:07:29.865134    9508 logs.go:276] 0 containers: []
	W0729 17:07:29.865148    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:07:29.865217    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:07:29.875849    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:07:29.875864    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:07:29.875870    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:07:29.910294    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:07:29.910307    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:07:29.924493    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:07:29.924504    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:07:29.936356    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:07:29.936370    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:07:29.948072    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:07:29.948082    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:07:29.967001    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:07:29.967011    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:07:29.971896    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:07:29.971903    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:07:30.008209    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:07:30.008225    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:07:30.023050    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:07:30.023064    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:07:30.034656    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:07:30.034666    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:07:30.050082    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:07:30.050098    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:07:30.061481    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:07:30.061495    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:07:30.086448    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:07:30.086459    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:07:32.600443    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:37.602692    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:37.602801    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:07:37.613662    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:07:37.613736    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:07:37.624157    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:07:37.624227    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:07:37.637289    9508 logs.go:276] 2 containers: [c418e990baba bb65121be772]
	I0729 17:07:37.637364    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:07:37.647932    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:07:37.648003    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:07:37.662086    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:07:37.662156    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:07:37.672584    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:07:37.672650    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:07:37.682971    9508 logs.go:276] 0 containers: []
	W0729 17:07:37.682983    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:07:37.683045    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:07:37.693585    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:07:37.693601    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:07:37.693607    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:07:37.730247    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:07:37.730258    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:07:37.744962    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:07:37.744976    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:07:37.758740    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:07:37.758750    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:07:37.783301    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:07:37.783310    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:07:37.794497    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:07:37.794511    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:07:37.829917    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:07:37.829943    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:07:37.844106    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:07:37.844118    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:07:37.856995    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:07:37.857007    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:07:37.873816    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:07:37.873832    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:07:37.889043    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:07:37.889052    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:07:37.908155    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:07:37.908172    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:07:37.920893    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:07:37.920904    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:07:40.427777    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:45.430065    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:45.430320    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:07:45.450157    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:07:45.450242    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:07:45.463665    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:07:45.463734    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:07:45.475725    9508 logs.go:276] 2 containers: [c418e990baba bb65121be772]
	I0729 17:07:45.475794    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:07:45.486278    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:07:45.486346    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:07:45.500628    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:07:45.500700    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:07:45.511390    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:07:45.511455    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:07:45.522078    9508 logs.go:276] 0 containers: []
	W0729 17:07:45.522088    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:07:45.522145    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:07:45.534101    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:07:45.534115    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:07:45.534121    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:07:45.538238    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:07:45.538247    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:07:45.573494    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:07:45.573505    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:07:45.585163    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:07:45.585173    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:07:45.596742    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:07:45.596751    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:07:45.613973    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:07:45.613983    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:07:45.625356    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:07:45.625366    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:07:45.658740    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:07:45.658748    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:07:45.672628    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:07:45.672639    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:07:45.685262    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:07:45.685272    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:07:45.701240    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:07:45.701252    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:07:45.716127    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:07:45.716138    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:07:45.744269    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:07:45.744282    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:07:48.261404    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:07:53.263798    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:07:53.263970    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:07:53.280826    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:07:53.280908    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:07:53.293651    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:07:53.293712    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:07:53.304851    9508 logs.go:276] 2 containers: [c418e990baba bb65121be772]
	I0729 17:07:53.304914    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:07:53.315970    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:07:53.316038    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:07:53.327032    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:07:53.327092    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:07:53.337840    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:07:53.337901    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:07:53.348714    9508 logs.go:276] 0 containers: []
	W0729 17:07:53.348734    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:07:53.348797    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:07:53.359824    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:07:53.359840    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:07:53.359846    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:07:53.371667    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:07:53.371680    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:07:53.406840    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:07:53.406853    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:07:53.415706    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:07:53.415719    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:07:53.451366    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:07:53.451379    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:07:53.467085    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:07:53.467097    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:07:53.479531    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:07:53.479545    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:07:53.491825    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:07:53.491837    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:07:53.510736    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:07:53.510743    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:07:53.537265    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:07:53.537280    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:07:53.550222    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:07:53.550233    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:07:53.565713    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:07:53.565728    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:07:53.579784    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:07:53.579794    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:07:56.099501    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:01.101849    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:01.102013    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:01.117663    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:08:01.117748    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:01.129498    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:08:01.129569    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:01.140293    9508 logs.go:276] 2 containers: [c418e990baba bb65121be772]
	I0729 17:08:01.140365    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:01.151572    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:08:01.151642    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:01.162560    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:08:01.162630    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:01.173380    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:08:01.173446    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:01.184327    9508 logs.go:276] 0 containers: []
	W0729 17:08:01.184338    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:01.184396    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:01.195335    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:08:01.195349    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:08:01.195354    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:01.207806    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:01.207816    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:01.244177    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:08:01.244187    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:08:01.257631    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:08:01.257642    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:08:01.270687    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:08:01.270700    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:08:01.286470    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:08:01.286481    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:08:01.298273    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:08:01.298284    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:08:01.316726    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:01.316738    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:01.340231    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:01.340241    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:01.375309    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:01.375330    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:01.380606    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:08:01.380617    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:08:01.399327    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:08:01.399337    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:08:01.414934    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:08:01.414948    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:08:03.930160    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:08.932570    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:08.932763    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:08.947553    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:08:08.947627    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:08.968465    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:08:08.968535    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:08.979846    9508 logs.go:276] 4 containers: [fc524074e68d aa960a5013d0 c418e990baba bb65121be772]
	I0729 17:08:08.979926    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:08.991094    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:08:08.991166    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:09.001668    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:08:09.001734    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:09.012158    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:08:09.012221    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:09.022842    9508 logs.go:276] 0 containers: []
	W0729 17:08:09.022852    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:09.022910    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:09.034242    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:08:09.034260    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:08:09.034265    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:09.046660    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:08:09.046676    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:08:09.066089    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:09.066099    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:09.100154    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:08:09.100174    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:08:09.115173    9508 logs.go:123] Gathering logs for coredns [aa960a5013d0] ...
	I0729 17:08:09.115186    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa960a5013d0"
	I0729 17:08:09.126528    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:08:09.126539    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:08:09.141954    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:09.141964    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:09.166067    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:09.166074    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:09.169925    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:09.169935    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:09.208649    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:08:09.208661    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:08:09.225360    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:08:09.225376    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:08:09.238200    9508 logs.go:123] Gathering logs for coredns [fc524074e68d] ...
	I0729 17:08:09.238212    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc524074e68d"
	I0729 17:08:09.251094    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:08:09.251107    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:08:09.268687    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:08:09.268699    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:08:09.281586    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:08:09.281600    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:08:11.796472    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:16.798969    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:16.799192    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:16.818201    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:08:16.818299    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:16.832289    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:08:16.832371    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:16.846679    9508 logs.go:276] 4 containers: [fc524074e68d aa960a5013d0 c418e990baba bb65121be772]
	I0729 17:08:16.846757    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:16.857471    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:08:16.857537    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:16.868072    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:08:16.868137    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:16.884928    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:08:16.884996    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:16.897191    9508 logs.go:276] 0 containers: []
	W0729 17:08:16.897203    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:16.897267    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:16.908336    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:08:16.908356    9508 logs.go:123] Gathering logs for coredns [aa960a5013d0] ...
	I0729 17:08:16.908361    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa960a5013d0"
	I0729 17:08:16.919835    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:08:16.919846    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:08:16.938755    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:16.938765    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:16.965228    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:08:16.965242    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:08:16.980066    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:08:16.980077    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:08:16.999717    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:08:16.999734    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:17.011851    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:17.011865    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:17.046390    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:17.046401    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:17.097635    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:08:17.097646    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:08:17.112904    9508 logs.go:123] Gathering logs for coredns [fc524074e68d] ...
	I0729 17:08:17.112914    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc524074e68d"
	I0729 17:08:17.126436    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:17.126445    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:17.131196    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:08:17.131205    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:08:17.147241    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:08:17.147252    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:08:17.163201    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:08:17.163213    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:08:17.176261    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:08:17.176272    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:08:19.691696    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:24.693960    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:24.694175    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:24.711272    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:08:24.711362    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:24.725064    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:08:24.725139    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:24.736086    9508 logs.go:276] 4 containers: [fc524074e68d aa960a5013d0 c418e990baba bb65121be772]
	I0729 17:08:24.736163    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:24.748328    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:08:24.748400    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:24.758388    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:08:24.758459    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:24.768851    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:08:24.768922    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:24.779145    9508 logs.go:276] 0 containers: []
	W0729 17:08:24.779164    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:24.779227    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:24.790021    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:08:24.790039    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:24.790045    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:24.794148    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:24.794156    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:24.830307    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:08:24.830320    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:08:24.842658    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:08:24.842671    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:08:24.854938    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:24.854951    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:24.892108    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:08:24.892126    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:08:24.905598    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:24.905609    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:24.932282    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:08:24.932290    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:08:24.951970    9508 logs.go:123] Gathering logs for coredns [fc524074e68d] ...
	I0729 17:08:24.951982    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc524074e68d"
	I0729 17:08:24.970186    9508 logs.go:123] Gathering logs for coredns [aa960a5013d0] ...
	I0729 17:08:24.970199    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa960a5013d0"
	I0729 17:08:24.986257    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:08:24.986267    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:08:25.002994    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:08:25.003006    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:08:25.021437    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:08:25.021451    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:25.034068    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:08:25.034078    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:08:25.054429    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:08:25.054440    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:08:27.568741    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:32.571017    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:32.571202    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:32.585823    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:08:32.585908    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:32.596781    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:08:32.596851    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:32.607442    9508 logs.go:276] 4 containers: [fc524074e68d aa960a5013d0 c418e990baba bb65121be772]
	I0729 17:08:32.607511    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:32.617916    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:08:32.617980    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:32.627986    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:08:32.628051    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:32.638796    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:08:32.638866    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:32.648835    9508 logs.go:276] 0 containers: []
	W0729 17:08:32.648846    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:32.648898    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:32.663673    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:08:32.663690    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:08:32.663695    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:08:32.678126    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:08:32.678137    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:08:32.690099    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:08:32.690110    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:08:32.701266    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:32.701276    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:32.726829    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:08:32.726845    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:32.739342    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:32.739352    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:32.775720    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:08:32.775730    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:08:32.795515    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:08:32.795527    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:08:32.812535    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:08:32.812549    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:08:32.826267    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:32.826280    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:32.863764    9508 logs.go:123] Gathering logs for coredns [fc524074e68d] ...
	I0729 17:08:32.863777    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc524074e68d"
	I0729 17:08:32.883760    9508 logs.go:123] Gathering logs for coredns [aa960a5013d0] ...
	I0729 17:08:32.883772    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa960a5013d0"
	I0729 17:08:32.895776    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:08:32.895788    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:08:32.908994    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:32.909006    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:32.913655    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:08:32.913665    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:08:35.434363    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:40.436647    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:40.436858    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:40.454135    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:08:40.454222    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:40.465602    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:08:40.465671    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:40.476014    9508 logs.go:276] 4 containers: [fc524074e68d aa960a5013d0 c418e990baba bb65121be772]
	I0729 17:08:40.476089    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:40.486711    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:08:40.486785    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:40.497507    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:08:40.497580    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:40.512247    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:08:40.512319    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:40.522944    9508 logs.go:276] 0 containers: []
	W0729 17:08:40.522955    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:40.523012    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:40.533534    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:08:40.533551    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:08:40.533556    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:08:40.547658    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:08:40.547670    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:08:40.559671    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:08:40.559681    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:08:40.571686    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:40.571695    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:40.597545    9508 logs.go:123] Gathering logs for coredns [aa960a5013d0] ...
	I0729 17:08:40.597561    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa960a5013d0"
	I0729 17:08:40.610238    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:08:40.610250    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:08:40.627531    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:08:40.627547    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:40.640225    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:40.640235    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:40.677205    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:08:40.677214    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:08:40.690380    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:08:40.690391    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:08:40.702205    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:40.702216    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:40.706683    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:40.706693    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:40.745131    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:08:40.745143    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:08:40.760724    9508 logs.go:123] Gathering logs for coredns [fc524074e68d] ...
	I0729 17:08:40.760735    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc524074e68d"
	I0729 17:08:40.773299    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:08:40.773310    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:08:43.293711    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:48.296121    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:48.296263    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:48.308516    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:08:48.308597    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:48.319458    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:08:48.319526    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:48.332162    9508 logs.go:276] 4 containers: [fc524074e68d aa960a5013d0 c418e990baba bb65121be772]
	I0729 17:08:48.332233    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:48.342970    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:08:48.343037    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:48.353538    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:08:48.353603    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:48.364101    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:08:48.364157    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:48.374610    9508 logs.go:276] 0 containers: []
	W0729 17:08:48.374624    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:48.374679    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:48.385532    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:08:48.385548    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:48.385553    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:48.390074    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:08:48.390083    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:08:48.403824    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:48.403834    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:48.429023    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:08:48.429037    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:48.441672    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:08:48.441683    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:08:48.457872    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:08:48.457882    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:08:48.471034    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:08:48.471043    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:08:48.488601    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:08:48.488615    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:08:48.501153    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:08:48.501168    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:08:48.514254    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:48.514265    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:48.550927    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:48.550942    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:48.588696    9508 logs.go:123] Gathering logs for coredns [aa960a5013d0] ...
	I0729 17:08:48.588707    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa960a5013d0"
	I0729 17:08:48.601957    9508 logs.go:123] Gathering logs for coredns [fc524074e68d] ...
	I0729 17:08:48.601969    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc524074e68d"
	I0729 17:08:48.618824    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:08:48.618834    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:08:48.631004    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:08:48.631014    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:08:51.151860    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:08:56.154166    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:08:56.154458    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:08:56.184003    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:08:56.184133    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:08:56.202196    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:08:56.202291    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:08:56.215503    9508 logs.go:276] 4 containers: [fc524074e68d aa960a5013d0 c418e990baba bb65121be772]
	I0729 17:08:56.215580    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:08:56.228019    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:08:56.228083    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:08:56.238076    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:08:56.238149    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:08:56.249496    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:08:56.249570    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:08:56.270179    9508 logs.go:276] 0 containers: []
	W0729 17:08:56.270190    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:08:56.270238    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:08:56.283104    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:08:56.283120    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:08:56.283126    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:08:56.303729    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:08:56.303738    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:08:56.316118    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:08:56.316126    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:08:56.331342    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:08:56.331357    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:08:56.346668    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:08:56.346680    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:08:56.365790    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:08:56.365804    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:08:56.379152    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:08:56.379159    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:08:56.416025    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:08:56.416033    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:08:56.443982    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:08:56.443998    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:08:56.457670    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:08:56.457684    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:08:56.474531    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:08:56.474542    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:08:56.487068    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:08:56.487079    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:08:56.492054    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:08:56.492066    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:08:56.530315    9508 logs.go:123] Gathering logs for coredns [fc524074e68d] ...
	I0729 17:08:56.530326    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc524074e68d"
	I0729 17:08:56.542580    9508 logs.go:123] Gathering logs for coredns [aa960a5013d0] ...
	I0729 17:08:56.542593    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa960a5013d0"
	I0729 17:08:59.068862    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:04.071182    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:04.071313    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:04.084945    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:09:04.085031    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:04.105595    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:09:04.105664    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:04.116892    9508 logs.go:276] 4 containers: [fc524074e68d aa960a5013d0 c418e990baba bb65121be772]
	I0729 17:09:04.116973    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:04.129485    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:09:04.129562    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:04.140939    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:09:04.141018    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:04.152696    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:09:04.152771    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:04.164043    9508 logs.go:276] 0 containers: []
	W0729 17:09:04.164057    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:04.164118    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:04.179507    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:09:04.179529    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:09:04.179535    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:04.193296    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:09:04.193309    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:09:04.209020    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:09:04.209034    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:09:04.222177    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:09:04.222188    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:09:04.235597    9508 logs.go:123] Gathering logs for coredns [fc524074e68d] ...
	I0729 17:09:04.235609    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc524074e68d"
	I0729 17:09:04.248210    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:09:04.248221    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:09:04.261509    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:09:04.261520    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:09:04.278526    9508 logs.go:123] Gathering logs for coredns [aa960a5013d0] ...
	I0729 17:09:04.278533    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa960a5013d0"
	I0729 17:09:04.290850    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:09:04.290863    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:09:04.303950    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:04.303961    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:09:04.330326    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:04.330337    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:04.366367    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:04.366387    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:04.371882    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:04.371894    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:04.409350    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:09:04.409369    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:09:04.426764    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:09:04.426779    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:09:06.948518    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:11.950874    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:11.951102    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:11.964897    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:09:11.964973    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:11.975929    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:09:11.975996    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:11.986945    9508 logs.go:276] 4 containers: [fc524074e68d aa960a5013d0 c418e990baba bb65121be772]
	I0729 17:09:11.987021    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:11.998274    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:09:11.998354    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:12.009732    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:09:12.009804    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:12.020990    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:09:12.021054    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:12.031675    9508 logs.go:276] 0 containers: []
	W0729 17:09:12.031688    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:12.031744    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:12.043532    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:09:12.043554    9508 logs.go:123] Gathering logs for coredns [fc524074e68d] ...
	I0729 17:09:12.043560    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc524074e68d"
	I0729 17:09:12.056396    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:09:12.056409    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:09:12.068956    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:09:12.068967    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:12.081510    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:09:12.081522    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:09:12.094366    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:09:12.094377    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:09:12.107330    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:09:12.107342    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:09:12.126352    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:12.126367    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:09:12.152943    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:12.152960    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:12.195473    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:09:12.195493    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:09:12.210585    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:09:12.210595    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:09:12.226373    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:12.226380    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:12.262618    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:12.262628    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:12.267313    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:09:12.267322    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:09:12.285289    9508 logs.go:123] Gathering logs for coredns [aa960a5013d0] ...
	I0729 17:09:12.285300    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa960a5013d0"
	I0729 17:09:12.297280    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:09:12.297292    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:09:14.813161    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:19.815481    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:19.815663    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:19.828419    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:09:19.828491    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:19.838866    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:09:19.838933    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:19.849666    9508 logs.go:276] 4 containers: [fc524074e68d aa960a5013d0 c418e990baba bb65121be772]
	I0729 17:09:19.849748    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:19.869781    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:09:19.869845    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:19.882058    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:09:19.882127    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:19.894828    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:09:19.894898    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:19.906522    9508 logs.go:276] 0 containers: []
	W0729 17:09:19.906534    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:19.906593    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:19.918348    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:09:19.918369    9508 logs.go:123] Gathering logs for coredns [fc524074e68d] ...
	I0729 17:09:19.918375    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc524074e68d"
	I0729 17:09:19.930588    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:19.930600    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:09:19.955570    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:09:19.955587    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:19.968455    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:09:19.968466    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:09:19.984726    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:09:19.984736    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:09:20.001912    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:09:20.001922    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:09:20.014406    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:09:20.014416    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:09:20.026957    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:09:20.026970    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:09:20.050378    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:20.050393    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:20.087795    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:20.087808    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:20.125771    9508 logs.go:123] Gathering logs for coredns [aa960a5013d0] ...
	I0729 17:09:20.125782    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa960a5013d0"
	I0729 17:09:20.138155    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:09:20.138171    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:09:20.150629    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:20.150640    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:20.155772    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:09:20.155782    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:09:20.171338    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:09:20.171348    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:09:22.691956    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:27.694547    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:27.694720    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:27.709177    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:09:27.709264    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:27.721404    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:09:27.721471    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:27.732672    9508 logs.go:276] 4 containers: [fc524074e68d aa960a5013d0 c418e990baba bb65121be772]
	I0729 17:09:27.732733    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:27.744116    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:09:27.744182    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:27.755729    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:09:27.755789    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:27.767297    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:09:27.767349    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:27.778620    9508 logs.go:276] 0 containers: []
	W0729 17:09:27.778630    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:27.778675    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:27.790057    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:09:27.790071    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:27.790077    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:27.824664    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:27.824678    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:27.829341    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:09:27.829352    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:09:27.844143    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:09:27.844155    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:09:27.860687    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:09:27.860696    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:09:27.875556    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:09:27.875573    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:09:27.887899    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:09:27.887911    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:09:27.906820    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:27.906833    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:09:27.933155    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:09:27.933170    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:27.946537    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:27.946549    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:27.987130    9508 logs.go:123] Gathering logs for coredns [fc524074e68d] ...
	I0729 17:09:27.987144    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc524074e68d"
	I0729 17:09:28.001146    9508 logs.go:123] Gathering logs for coredns [aa960a5013d0] ...
	I0729 17:09:28.001157    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa960a5013d0"
	I0729 17:09:28.013409    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:09:28.013416    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:09:28.028048    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:09:28.028058    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:09:28.045000    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:09:28.045011    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:09:30.560126    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:35.562493    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:35.562729    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:35.577683    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:09:35.577815    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:35.593627    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:09:35.593695    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:35.604172    9508 logs.go:276] 4 containers: [fc524074e68d aa960a5013d0 c418e990baba bb65121be772]
	I0729 17:09:35.604237    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:35.615230    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:09:35.615295    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:35.626730    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:09:35.626796    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:35.638014    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:09:35.638078    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:35.649591    9508 logs.go:276] 0 containers: []
	W0729 17:09:35.649600    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:35.649655    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:35.660931    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:09:35.660946    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:35.660951    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:35.699379    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:09:35.699389    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:09:35.712425    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:09:35.712437    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:09:35.729639    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:35.729654    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:09:35.755306    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:35.755322    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:35.791205    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:35.791227    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:35.796106    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:09:35.796115    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:09:35.811138    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:09:35.811150    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:09:35.824348    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:09:35.824360    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:09:35.846346    9508 logs.go:123] Gathering logs for coredns [aa960a5013d0] ...
	I0729 17:09:35.846354    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa960a5013d0"
	I0729 17:09:35.859405    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:09:35.859417    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:09:35.872805    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:09:35.872816    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:09:35.887043    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:09:35.887055    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:09:35.902504    9508 logs.go:123] Gathering logs for coredns [fc524074e68d] ...
	I0729 17:09:35.902518    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc524074e68d"
	I0729 17:09:35.915576    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:09:35.915588    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:38.430376    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:43.432838    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:43.433241    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 17:09:43.469905    9508 logs.go:276] 1 containers: [b897e5b5de90]
	I0729 17:09:43.470045    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 17:09:43.490033    9508 logs.go:276] 1 containers: [b871b506e993]
	I0729 17:09:43.490127    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 17:09:43.508964    9508 logs.go:276] 4 containers: [fc524074e68d aa960a5013d0 c418e990baba bb65121be772]
	I0729 17:09:43.509038    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 17:09:43.523019    9508 logs.go:276] 1 containers: [318c84570864]
	I0729 17:09:43.523091    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 17:09:43.534503    9508 logs.go:276] 1 containers: [762543dd9ba3]
	I0729 17:09:43.534575    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 17:09:43.546175    9508 logs.go:276] 1 containers: [4fecc2f1acc5]
	I0729 17:09:43.546248    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 17:09:43.559201    9508 logs.go:276] 0 containers: []
	W0729 17:09:43.559213    9508 logs.go:278] No container was found matching "kindnet"
	I0729 17:09:43.559274    9508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 17:09:43.571106    9508 logs.go:276] 1 containers: [42c3fafd3498]
	I0729 17:09:43.571133    9508 logs.go:123] Gathering logs for storage-provisioner [42c3fafd3498] ...
	I0729 17:09:43.571139    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c3fafd3498"
	I0729 17:09:43.585164    9508 logs.go:123] Gathering logs for describe nodes ...
	I0729 17:09:43.585178    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 17:09:43.622855    9508 logs.go:123] Gathering logs for etcd [b871b506e993] ...
	I0729 17:09:43.622872    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b871b506e993"
	I0729 17:09:43.638368    9508 logs.go:123] Gathering logs for coredns [bb65121be772] ...
	I0729 17:09:43.638384    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65121be772"
	I0729 17:09:43.651051    9508 logs.go:123] Gathering logs for kube-apiserver [b897e5b5de90] ...
	I0729 17:09:43.651062    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b897e5b5de90"
	I0729 17:09:43.666394    9508 logs.go:123] Gathering logs for coredns [fc524074e68d] ...
	I0729 17:09:43.666408    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc524074e68d"
	I0729 17:09:43.678865    9508 logs.go:123] Gathering logs for kube-proxy [762543dd9ba3] ...
	I0729 17:09:43.678876    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 762543dd9ba3"
	I0729 17:09:43.691392    9508 logs.go:123] Gathering logs for coredns [aa960a5013d0] ...
	I0729 17:09:43.691403    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa960a5013d0"
	I0729 17:09:43.704120    9508 logs.go:123] Gathering logs for kube-scheduler [318c84570864] ...
	I0729 17:09:43.704131    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 318c84570864"
	I0729 17:09:43.720918    9508 logs.go:123] Gathering logs for container status ...
	I0729 17:09:43.720931    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 17:09:43.734315    9508 logs.go:123] Gathering logs for kube-controller-manager [4fecc2f1acc5] ...
	I0729 17:09:43.734323    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fecc2f1acc5"
	I0729 17:09:43.753266    9508 logs.go:123] Gathering logs for Docker ...
	I0729 17:09:43.753279    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 17:09:43.779213    9508 logs.go:123] Gathering logs for kubelet ...
	I0729 17:09:43.779225    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 17:09:43.815887    9508 logs.go:123] Gathering logs for dmesg ...
	I0729 17:09:43.815904    9508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 17:09:43.820761    9508 logs.go:123] Gathering logs for coredns [c418e990baba] ...
	I0729 17:09:43.820773    9508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c418e990baba"
	I0729 17:09:46.334799    9508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 17:09:51.337118    9508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 17:09:51.341559    9508 out.go:177] 
	W0729 17:09:51.346438    9508 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 17:09:51.346450    9508 out.go:239] * 
	W0729 17:09:51.347035    9508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:09:51.358449    9508 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-208000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (582.72s)

TestPause/serial/Start (9.82s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-246000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-246000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.764752084s)

-- stdout --
	* [pause-246000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-246000" primary control-plane node in "pause-246000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-246000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-246000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-246000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-246000 -n pause-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-246000 -n pause-246000: exit status 7 (50.280542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.82s)

TestNoKubernetes/serial/StartWithK8s (9.79s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-757000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-757000 --driver=qemu2 : exit status 80 (9.720745375s)

-- stdout --
	* [NoKubernetes-757000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-757000" primary control-plane node in "NoKubernetes-757000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-757000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-757000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-757000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-757000 -n NoKubernetes-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-757000 -n NoKubernetes-757000: exit status 7 (65.547917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.79s)

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-757000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-757000 --no-kubernetes --driver=qemu2 : exit status 80 (5.259269917s)

-- stdout --
	* [NoKubernetes-757000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-757000
	* Restarting existing qemu2 VM for "NoKubernetes-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-757000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-757000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-757000 -n NoKubernetes-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-757000 -n NoKubernetes-757000: exit status 7 (49.632375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

TestNoKubernetes/serial/Start (5.26s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-757000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-757000 --no-kubernetes --driver=qemu2 : exit status 80 (5.229998792s)

-- stdout --
	* [NoKubernetes-757000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-757000
	* Restarting existing qemu2 VM for "NoKubernetes-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-757000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-757000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-757000 -n NoKubernetes-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-757000 -n NoKubernetes-757000: exit status 7 (31.834833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.26s)

TestNoKubernetes/serial/StartNoArgs (6.68s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-757000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-757000 --driver=qemu2 : exit status 80 (6.619208375s)

-- stdout --
	* [NoKubernetes-757000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-757000
	* Restarting existing qemu2 VM for "NoKubernetes-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-757000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-757000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-757000 -n NoKubernetes-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-757000 -n NoKubernetes-757000: exit status 7 (56.187583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (6.68s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.58s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.58s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.92s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19346
- KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4145700062/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.92s)
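The two TestHyperkitDriverSkipUpgrade failures above are expected on this hardware: hyperkit ships only x86_64 binaries, so minikube exits with `DRV_UNSUPPORTED_OS` on these arm64 agents. A quick host-architecture check mirroring that guard is sketched below; the driver mapping is an assumption for illustration (this report's own runs use qemu2 on arm64), not minikube's exact selection logic.

```shell
# hyperkit is x86_64-only; on Apple Silicon minikube rejects it with
# DRV_UNSUPPORTED_OS. Map the host architecture to a plausible VM driver.
ARCH="$(uname -m)"
case "$ARCH" in
  arm64|aarch64) DRIVER_HINT="qemu2" ;;
  x86_64)        DRIVER_HINT="hyperkit" ;;
  *)             DRIVER_HINT="unknown" ;;
esac
echo "arch=$ARCH suggested-driver=$DRIVER_HINT"
```

On these agents the check would land in the `arm64` branch, which is why every passing start in this report goes through the qemu2 driver instead.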

TestNetworkPlugins/group/kindnet/Start (10.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.114050417s)

-- stdout --
	* [kindnet-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-561000" primary control-plane node in "kindnet-561000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-561000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:11:57.190095   10058 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:11:57.190246   10058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:11:57.190249   10058 out.go:304] Setting ErrFile to fd 2...
	I0729 17:11:57.190251   10058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:11:57.190396   10058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:11:57.191439   10058 out.go:298] Setting JSON to false
	I0729 17:11:57.207478   10058 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6084,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:11:57.207544   10058 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:11:57.213556   10058 out.go:177] * [kindnet-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:11:57.222502   10058 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:11:57.222545   10058 notify.go:220] Checking for updates...
	I0729 17:11:57.231447   10058 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:11:57.234480   10058 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:11:57.237420   10058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:11:57.240504   10058 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:11:57.243550   10058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:11:57.245456   10058 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:11:57.245534   10058 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:11:57.245581   10058 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:11:57.249494   10058 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:11:57.256384   10058 start.go:297] selected driver: qemu2
	I0729 17:11:57.256392   10058 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:11:57.256400   10058 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:11:57.258853   10058 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:11:57.262523   10058 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:11:57.266631   10058 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:11:57.266678   10058 cni.go:84] Creating CNI manager for "kindnet"
	I0729 17:11:57.266686   10058 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 17:11:57.266714   10058 start.go:340] cluster config:
	{Name:kindnet-561000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:11:57.270441   10058 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:11:57.278518   10058 out.go:177] * Starting "kindnet-561000" primary control-plane node in "kindnet-561000" cluster
	I0729 17:11:57.282558   10058 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:11:57.282576   10058 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:11:57.282589   10058 cache.go:56] Caching tarball of preloaded images
	I0729 17:11:57.282669   10058 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:11:57.282674   10058 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:11:57.282738   10058 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/kindnet-561000/config.json ...
	I0729 17:11:57.282749   10058 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/kindnet-561000/config.json: {Name:mkd77c2f797882723aeaa3710aed02f5885ddfab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:11:57.282968   10058 start.go:360] acquireMachinesLock for kindnet-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:11:57.283002   10058 start.go:364] duration metric: took 28.875µs to acquireMachinesLock for "kindnet-561000"
	I0729 17:11:57.283013   10058 start.go:93] Provisioning new machine with config: &{Name:kindnet-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:11:57.283041   10058 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:11:57.291564   10058 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:11:57.309348   10058 start.go:159] libmachine.API.Create for "kindnet-561000" (driver="qemu2")
	I0729 17:11:57.309375   10058 client.go:168] LocalClient.Create starting
	I0729 17:11:57.309440   10058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:11:57.309467   10058 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:57.309476   10058 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:57.309520   10058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:11:57.309542   10058 main.go:141] libmachine: Decoding PEM data...
	I0729 17:11:57.309549   10058 main.go:141] libmachine: Parsing certificate...
	I0729 17:11:57.309894   10058 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:11:57.460755   10058 main.go:141] libmachine: Creating SSH key...
	I0729 17:11:57.674622   10058 main.go:141] libmachine: Creating Disk image...
	I0729 17:11:57.674629   10058 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:11:57.674883   10058 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/disk.qcow2
	I0729 17:11:57.684867   10058 main.go:141] libmachine: STDOUT: 
	I0729 17:11:57.684884   10058 main.go:141] libmachine: STDERR: 
	I0729 17:11:57.684941   10058 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/disk.qcow2 +20000M
	I0729 17:11:57.692893   10058 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:11:57.692906   10058 main.go:141] libmachine: STDERR: 
	I0729 17:11:57.692918   10058 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/disk.qcow2
	I0729 17:11:57.692923   10058 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:11:57.692943   10058 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:11:57.692964   10058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:44:b3:06:5d:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/disk.qcow2
	I0729 17:11:57.694533   10058 main.go:141] libmachine: STDOUT: 
	I0729 17:11:57.694549   10058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:11:57.694566   10058 client.go:171] duration metric: took 385.182875ms to LocalClient.Create
	I0729 17:11:59.696734   10058 start.go:128] duration metric: took 2.413677s to createHost
	I0729 17:11:59.696780   10058 start.go:83] releasing machines lock for "kindnet-561000", held for 2.413768292s
	W0729 17:11:59.696858   10058 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:11:59.713009   10058 out.go:177] * Deleting "kindnet-561000" in qemu2 ...
	W0729 17:11:59.739326   10058 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:11:59.739356   10058 start.go:729] Will try again in 5 seconds ...
	I0729 17:12:04.741593   10058 start.go:360] acquireMachinesLock for kindnet-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:12:04.742077   10058 start.go:364] duration metric: took 343.791µs to acquireMachinesLock for "kindnet-561000"
	I0729 17:12:04.742209   10058 start.go:93] Provisioning new machine with config: &{Name:kindnet-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:12:04.742475   10058 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:12:04.752190   10058 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:12:04.802354   10058 start.go:159] libmachine.API.Create for "kindnet-561000" (driver="qemu2")
	I0729 17:12:04.802413   10058 client.go:168] LocalClient.Create starting
	I0729 17:12:04.802519   10058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:12:04.802579   10058 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:04.802606   10058 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:04.802669   10058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:12:04.802714   10058 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:04.802724   10058 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:04.803297   10058 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:12:04.964728   10058 main.go:141] libmachine: Creating SSH key...
	I0729 17:12:05.214150   10058 main.go:141] libmachine: Creating Disk image...
	I0729 17:12:05.214164   10058 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:12:05.214393   10058 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/disk.qcow2
	I0729 17:12:05.224299   10058 main.go:141] libmachine: STDOUT: 
	I0729 17:12:05.224318   10058 main.go:141] libmachine: STDERR: 
	I0729 17:12:05.224376   10058 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/disk.qcow2 +20000M
	I0729 17:12:05.232397   10058 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:12:05.232410   10058 main.go:141] libmachine: STDERR: 
	I0729 17:12:05.232423   10058 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/disk.qcow2
	I0729 17:12:05.232427   10058 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:12:05.232439   10058 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:12:05.232474   10058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:ae:3e:3c:cb:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kindnet-561000/disk.qcow2
	I0729 17:12:05.234123   10058 main.go:141] libmachine: STDOUT: 
	I0729 17:12:05.234138   10058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:12:05.234151   10058 client.go:171] duration metric: took 431.734292ms to LocalClient.Create
	I0729 17:12:07.236509   10058 start.go:128] duration metric: took 2.493924667s to createHost
	I0729 17:12:07.236621   10058 start.go:83] releasing machines lock for "kindnet-561000", held for 2.494520125s
	W0729 17:12:07.237003   10058 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:12:07.246579   10058 out.go:177] 
	W0729 17:12:07.251276   10058 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:12:07.251329   10058 out.go:239] * 
	* 
	W0729 17:12:07.253698   10058 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:12:07.262219   10058 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.12s)
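Every failure in this group reduces to the same stderr line: `socket_vmnet_client` cannot reach the daemon at `/var/run/socket_vmnet`. That exact error mode can be reproduced in isolation — a unix-domain socket file with nothing listening behind it refuses every client. A minimal sketch (the temporary path and the `probe` helper are illustrative, not part of minikube or socket_vmnet):

```python
import os
import socket
import tempfile

def probe(path: str) -> str:
    """Classify what a client sees when dialing a unix socket path."""
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        client.connect(path)
        return "ok"
    except FileNotFoundError:
        return "no socket file"       # daemon never created the socket
    except ConnectionRefusedError:
        return "connection refused"   # file exists, but nothing is listening
    finally:
        client.close()

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        stale = os.path.join(d, "socket_vmnet")
        server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        server.bind(stale)            # creates the socket file, but no listen()
        print(probe(stale))           # same refusal the driver logs report
        server.close()
```

This suggests the report's failures are environmental (the socket_vmnet daemon on the CI host was down or left a stale socket) rather than anything in the minikube code under test.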

TestNetworkPlugins/group/auto/Start (9.97s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.96425125s)

-- stdout --
	* [auto-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-561000" primary control-plane node in "auto-561000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-561000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:12:09.604450   10183 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:12:09.604564   10183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:12:09.604567   10183 out.go:304] Setting ErrFile to fd 2...
	I0729 17:12:09.604577   10183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:12:09.604696   10183 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:12:09.605802   10183 out.go:298] Setting JSON to false
	I0729 17:12:09.622237   10183 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6096,"bootTime":1722292233,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:12:09.622324   10183 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:12:09.628787   10183 out.go:177] * [auto-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:12:09.636706   10183 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:12:09.636759   10183 notify.go:220] Checking for updates...
	I0729 17:12:09.644785   10183 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:12:09.649956   10183 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:12:09.652785   10183 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:12:09.655802   10183 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:12:09.658776   10183 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:12:09.662034   10183 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:12:09.662111   10183 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:12:09.662154   10183 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:12:09.665758   10183 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:12:09.672727   10183 start.go:297] selected driver: qemu2
	I0729 17:12:09.672732   10183 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:12:09.672741   10183 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:12:09.675097   10183 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:12:09.679726   10183 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:12:09.682782   10183 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:12:09.682818   10183 cni.go:84] Creating CNI manager for ""
	I0729 17:12:09.682826   10183 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:12:09.682831   10183 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:12:09.682868   10183 start.go:340] cluster config:
	{Name:auto-561000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:12:09.686614   10183 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:12:09.694780   10183 out.go:177] * Starting "auto-561000" primary control-plane node in "auto-561000" cluster
	I0729 17:12:09.698717   10183 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:12:09.698731   10183 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:12:09.698741   10183 cache.go:56] Caching tarball of preloaded images
	I0729 17:12:09.698796   10183 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:12:09.698802   10183 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:12:09.698866   10183 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/auto-561000/config.json ...
	I0729 17:12:09.698877   10183 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/auto-561000/config.json: {Name:mk11c416daf10463bd67533b9611229a045db6c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:12:09.699274   10183 start.go:360] acquireMachinesLock for auto-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:12:09.699310   10183 start.go:364] duration metric: took 29.334µs to acquireMachinesLock for "auto-561000"
	I0729 17:12:09.699320   10183 start.go:93] Provisioning new machine with config: &{Name:auto-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:12:09.699354   10183 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:12:09.703684   10183 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:12:09.722733   10183 start.go:159] libmachine.API.Create for "auto-561000" (driver="qemu2")
	I0729 17:12:09.722760   10183 client.go:168] LocalClient.Create starting
	I0729 17:12:09.722827   10183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:12:09.722855   10183 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:09.722864   10183 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:09.722906   10183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:12:09.722930   10183 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:09.722937   10183 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:09.723452   10183 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:12:09.876318   10183 main.go:141] libmachine: Creating SSH key...
	I0729 17:12:09.932521   10183 main.go:141] libmachine: Creating Disk image...
	I0729 17:12:09.932526   10183 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:12:09.932766   10183 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/disk.qcow2
	I0729 17:12:09.941859   10183 main.go:141] libmachine: STDOUT: 
	I0729 17:12:09.941885   10183 main.go:141] libmachine: STDERR: 
	I0729 17:12:09.941927   10183 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/disk.qcow2 +20000M
	I0729 17:12:09.949852   10183 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:12:09.949874   10183 main.go:141] libmachine: STDERR: 
	I0729 17:12:09.949891   10183 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/disk.qcow2
	I0729 17:12:09.949895   10183 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:12:09.949907   10183 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:12:09.949930   10183 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:0e:06:9b:5c:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/disk.qcow2
	I0729 17:12:09.951557   10183 main.go:141] libmachine: STDOUT: 
	I0729 17:12:09.951578   10183 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:12:09.951604   10183 client.go:171] duration metric: took 228.840041ms to LocalClient.Create
	I0729 17:12:11.953819   10183 start.go:128] duration metric: took 2.2544355s to createHost
	I0729 17:12:11.953911   10183 start.go:83] releasing machines lock for "auto-561000", held for 2.254592958s
	W0729 17:12:11.954025   10183 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:12:11.966124   10183 out.go:177] * Deleting "auto-561000" in qemu2 ...
	W0729 17:12:11.997730   10183 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:12:11.997768   10183 start.go:729] Will try again in 5 seconds ...
	I0729 17:12:17.000134   10183 start.go:360] acquireMachinesLock for auto-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:12:17.000650   10183 start.go:364] duration metric: took 354.75µs to acquireMachinesLock for "auto-561000"
	I0729 17:12:17.000797   10183 start.go:93] Provisioning new machine with config: &{Name:auto-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:12:17.001108   10183 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:12:17.017927   10183 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:12:17.067849   10183 start.go:159] libmachine.API.Create for "auto-561000" (driver="qemu2")
	I0729 17:12:17.067901   10183 client.go:168] LocalClient.Create starting
	I0729 17:12:17.068009   10183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:12:17.068073   10183 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:17.068087   10183 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:17.068166   10183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:12:17.068212   10183 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:17.068253   10183 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:17.068847   10183 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:12:17.229304   10183 main.go:141] libmachine: Creating SSH key...
	I0729 17:12:17.475157   10183 main.go:141] libmachine: Creating Disk image...
	I0729 17:12:17.475166   10183 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:12:17.475445   10183 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/disk.qcow2
	I0729 17:12:17.485323   10183 main.go:141] libmachine: STDOUT: 
	I0729 17:12:17.485344   10183 main.go:141] libmachine: STDERR: 
	I0729 17:12:17.485391   10183 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/disk.qcow2 +20000M
	I0729 17:12:17.493431   10183 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:12:17.493446   10183 main.go:141] libmachine: STDERR: 
	I0729 17:12:17.493456   10183 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/disk.qcow2
	I0729 17:12:17.493460   10183 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:12:17.493468   10183 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:12:17.493505   10183 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:95:1a:c6:30:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/auto-561000/disk.qcow2
	I0729 17:12:17.495134   10183 main.go:141] libmachine: STDOUT: 
	I0729 17:12:17.495149   10183 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:12:17.495161   10183 client.go:171] duration metric: took 427.255833ms to LocalClient.Create
	I0729 17:12:19.497339   10183 start.go:128] duration metric: took 2.496204833s to createHost
	I0729 17:12:19.497410   10183 start.go:83] releasing machines lock for "auto-561000", held for 2.496736958s
	W0729 17:12:19.497760   10183 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:12:19.511434   10183 out.go:177] 
	W0729 17:12:19.515493   10183 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:12:19.515555   10183 out.go:239] * 
	* 
	W0729 17:12:19.518378   10183 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:12:19.526289   10183 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.97s)
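The auto group fails identically to kindnet (and flannel below), so a pre-flight check on the daemon endpoint would have flagged all of these runs before `minikube start` was ever invoked. A hedged sketch of such a check; only the `/var/run/socket_vmnet` default path comes from the logs, and the `classify_endpoint` helper and its return strings are illustrative:

```python
import os
import stat

def classify_endpoint(path: str = "/var/run/socket_vmnet") -> str:
    """Describe the state of the socket_vmnet endpoint path.

    "absent" and "not-a-socket" both mean the daemon was never started at
    this path; note that even an existing socket can still refuse
    connections if the daemon died and left the file behind.
    """
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        return "absent"
    return "socket" if stat.S_ISSOCK(mode) else "not-a-socket"
```

On a healthy CI host this would report "socket"; anything else means every qemu2 test in the job will exit with status 80 the way this report shows.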

TestNetworkPlugins/group/flannel/Start (9.91s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.907279042s)

-- stdout --
	* [flannel-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-561000" primary control-plane node in "flannel-561000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-561000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:12:21.734530   10296 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:12:21.734654   10296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:12:21.734657   10296 out.go:304] Setting ErrFile to fd 2...
	I0729 17:12:21.734659   10296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:12:21.734794   10296 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:12:21.735885   10296 out.go:298] Setting JSON to false
	I0729 17:12:21.752090   10296 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6108,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:12:21.752174   10296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:12:21.758115   10296 out.go:177] * [flannel-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:12:21.766043   10296 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:12:21.766111   10296 notify.go:220] Checking for updates...
	I0729 17:12:21.771965   10296 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:12:21.775029   10296 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:12:21.779018   10296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:12:21.781996   10296 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:12:21.785062   10296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:12:21.788427   10296 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:12:21.788503   10296 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:12:21.788549   10296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:12:21.793014   10296 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:12:21.800100   10296 start.go:297] selected driver: qemu2
	I0729 17:12:21.800106   10296 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:12:21.800112   10296 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:12:21.802534   10296 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:12:21.805039   10296 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:12:21.809017   10296 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:12:21.809035   10296 cni.go:84] Creating CNI manager for "flannel"
	I0729 17:12:21.809038   10296 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0729 17:12:21.809075   10296 start.go:340] cluster config:
	{Name:flannel-561000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:12:21.812986   10296 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:12:21.818989   10296 out.go:177] * Starting "flannel-561000" primary control-plane node in "flannel-561000" cluster
	I0729 17:12:21.823023   10296 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:12:21.823040   10296 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:12:21.823058   10296 cache.go:56] Caching tarball of preloaded images
	I0729 17:12:21.823125   10296 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:12:21.823130   10296 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:12:21.823195   10296 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/flannel-561000/config.json ...
	I0729 17:12:21.823207   10296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/flannel-561000/config.json: {Name:mk9984c7b975458d9862401212997e1f130f3bd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:12:21.823590   10296 start.go:360] acquireMachinesLock for flannel-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:12:21.823625   10296 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "flannel-561000"
	I0729 17:12:21.823636   10296 start.go:93] Provisioning new machine with config: &{Name:flannel-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:12:21.823663   10296 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:12:21.828000   10296 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:12:21.845172   10296 start.go:159] libmachine.API.Create for "flannel-561000" (driver="qemu2")
	I0729 17:12:21.845201   10296 client.go:168] LocalClient.Create starting
	I0729 17:12:21.845266   10296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:12:21.845302   10296 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:21.845314   10296 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:21.845350   10296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:12:21.845375   10296 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:21.845387   10296 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:21.845776   10296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:12:21.997356   10296 main.go:141] libmachine: Creating SSH key...
	I0729 17:12:22.197406   10296 main.go:141] libmachine: Creating Disk image...
	I0729 17:12:22.197415   10296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:12:22.197637   10296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/disk.qcow2
	I0729 17:12:22.207375   10296 main.go:141] libmachine: STDOUT: 
	I0729 17:12:22.207397   10296 main.go:141] libmachine: STDERR: 
	I0729 17:12:22.207449   10296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/disk.qcow2 +20000M
	I0729 17:12:22.215273   10296 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:12:22.215287   10296 main.go:141] libmachine: STDERR: 
	I0729 17:12:22.215306   10296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/disk.qcow2
	I0729 17:12:22.215311   10296 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:12:22.215321   10296 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:12:22.215343   10296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:db:25:ec:02:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/disk.qcow2
	I0729 17:12:22.216897   10296 main.go:141] libmachine: STDOUT: 
	I0729 17:12:22.216911   10296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:12:22.216929   10296 client.go:171] duration metric: took 371.723125ms to LocalClient.Create
	I0729 17:12:24.219100   10296 start.go:128] duration metric: took 2.395417584s to createHost
	I0729 17:12:24.219148   10296 start.go:83] releasing machines lock for "flannel-561000", held for 2.395513167s
	W0729 17:12:24.219232   10296 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:12:24.233232   10296 out.go:177] * Deleting "flannel-561000" in qemu2 ...
	W0729 17:12:24.259369   10296 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:12:24.259438   10296 start.go:729] Will try again in 5 seconds ...
	I0729 17:12:29.261672   10296 start.go:360] acquireMachinesLock for flannel-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:12:29.262124   10296 start.go:364] duration metric: took 363.667µs to acquireMachinesLock for "flannel-561000"
	I0729 17:12:29.262272   10296 start.go:93] Provisioning new machine with config: &{Name:flannel-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:12:29.262633   10296 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:12:29.280425   10296 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:12:29.329418   10296 start.go:159] libmachine.API.Create for "flannel-561000" (driver="qemu2")
	I0729 17:12:29.329474   10296 client.go:168] LocalClient.Create starting
	I0729 17:12:29.329607   10296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:12:29.329672   10296 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:29.329696   10296 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:29.329766   10296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:12:29.329811   10296 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:29.329825   10296 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:29.330345   10296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:12:29.492574   10296 main.go:141] libmachine: Creating SSH key...
	I0729 17:12:29.549680   10296 main.go:141] libmachine: Creating Disk image...
	I0729 17:12:29.549685   10296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:12:29.549905   10296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/disk.qcow2
	I0729 17:12:29.559103   10296 main.go:141] libmachine: STDOUT: 
	I0729 17:12:29.559120   10296 main.go:141] libmachine: STDERR: 
	I0729 17:12:29.559175   10296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/disk.qcow2 +20000M
	I0729 17:12:29.566928   10296 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:12:29.566942   10296 main.go:141] libmachine: STDERR: 
	I0729 17:12:29.566954   10296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/disk.qcow2
	I0729 17:12:29.566958   10296 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:12:29.566976   10296 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:12:29.567011   10296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:9d:5e:27:aa:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/flannel-561000/disk.qcow2
	I0729 17:12:29.568632   10296 main.go:141] libmachine: STDOUT: 
	I0729 17:12:29.568644   10296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:12:29.568663   10296 client.go:171] duration metric: took 239.183417ms to LocalClient.Create
	I0729 17:12:31.570841   10296 start.go:128] duration metric: took 2.308182541s to createHost
	I0729 17:12:31.570904   10296 start.go:83] releasing machines lock for "flannel-561000", held for 2.308753625s
	W0729 17:12:31.571262   10296 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:12:31.582522   10296 out.go:177] 
	W0729 17:12:31.589541   10296 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:12:31.589580   10296 out.go:239] * 
	* 
	W0729 17:12:31.592159   10296 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:12:31.599499   10296 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.91s)
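Every failure above reduces to the same root cause: QEMU's network backend cannot reach the socket_vmnet helper, so `socket_vmnet_client` exits with `Failed to connect to "/var/run/socket_vmnet": Connection refused`. A minimal diagnostic sketch, assuming the Homebrew socket_vmnet install whose paths appear in the log (the `brew services` step is an assumption about how the helper was installed on this agent):

```shell
#!/bin/sh
# Check whether the socket_vmnet helper that minikube's qemu2 driver
# needs is actually listening on its UNIX socket.
SOCK=/var/run/socket_vmnet   # path taken from the log above
if [ -S "$SOCK" ]; then
    echo "socket present: $SOCK"
else
    echo "socket missing: $SOCK"
    # Assumed remedy for a Homebrew install of the helper:
    echo "try: sudo brew services start socket_vmnet"
fi
```

If the socket is missing on the Jenkins agent, every qemu2 test that selects the socket_vmnet network will fail the same way, which matches the pattern in this run.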

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.92256325s)

                                                
                                                
-- stdout --
	* [enable-default-cni-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-561000" primary control-plane node in "enable-default-cni-561000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-561000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:12:34.014334   10417 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:12:34.014480   10417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:12:34.014483   10417 out.go:304] Setting ErrFile to fd 2...
	I0729 17:12:34.014486   10417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:12:34.014633   10417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:12:34.015695   10417 out.go:298] Setting JSON to false
	I0729 17:12:34.031793   10417 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6121,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:12:34.031858   10417 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:12:34.037955   10417 out.go:177] * [enable-default-cni-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:12:34.045852   10417 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:12:34.045900   10417 notify.go:220] Checking for updates...
	I0729 17:12:34.052824   10417 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:12:34.055861   10417 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:12:34.059884   10417 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:12:34.062877   10417 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:12:34.065848   10417 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:12:34.069269   10417 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:12:34.069342   10417 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:12:34.069390   10417 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:12:34.073839   10417 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:12:34.080875   10417 start.go:297] selected driver: qemu2
	I0729 17:12:34.080883   10417 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:12:34.080892   10417 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:12:34.083345   10417 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:12:34.086825   10417 out.go:177] * Automatically selected the socket_vmnet network
	E0729 17:12:34.089884   10417 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0729 17:12:34.089896   10417 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:12:34.089925   10417 cni.go:84] Creating CNI manager for "bridge"
	I0729 17:12:34.089929   10417 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:12:34.089959   10417 start.go:340] cluster config:
	{Name:enable-default-cni-561000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:12:34.093744   10417 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:12:34.101855   10417 out.go:177] * Starting "enable-default-cni-561000" primary control-plane node in "enable-default-cni-561000" cluster
	I0729 17:12:34.105881   10417 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:12:34.105916   10417 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:12:34.105935   10417 cache.go:56] Caching tarball of preloaded images
	I0729 17:12:34.106007   10417 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:12:34.106013   10417 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:12:34.106072   10417 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/enable-default-cni-561000/config.json ...
	I0729 17:12:34.106083   10417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/enable-default-cni-561000/config.json: {Name:mkef1ddfcf526a43ea79d4ef87a7b99054152b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:12:34.106319   10417 start.go:360] acquireMachinesLock for enable-default-cni-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:12:34.106358   10417 start.go:364] duration metric: took 30.083µs to acquireMachinesLock for "enable-default-cni-561000"
	I0729 17:12:34.106370   10417 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:12:34.106410   10417 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:12:34.113841   10417 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:12:34.131828   10417 start.go:159] libmachine.API.Create for "enable-default-cni-561000" (driver="qemu2")
	I0729 17:12:34.131859   10417 client.go:168] LocalClient.Create starting
	I0729 17:12:34.131930   10417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:12:34.131961   10417 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:34.131971   10417 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:34.132007   10417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:12:34.132035   10417 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:34.132047   10417 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:34.132417   10417 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:12:34.283233   10417 main.go:141] libmachine: Creating SSH key...
	I0729 17:12:34.335611   10417 main.go:141] libmachine: Creating Disk image...
	I0729 17:12:34.335616   10417 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:12:34.335815   10417 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/disk.qcow2
	I0729 17:12:34.344913   10417 main.go:141] libmachine: STDOUT: 
	I0729 17:12:34.344930   10417 main.go:141] libmachine: STDERR: 
	I0729 17:12:34.344969   10417 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/disk.qcow2 +20000M
	I0729 17:12:34.352700   10417 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:12:34.352714   10417 main.go:141] libmachine: STDERR: 
	I0729 17:12:34.352725   10417 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/disk.qcow2
	I0729 17:12:34.352738   10417 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:12:34.352751   10417 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:12:34.352777   10417 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:42:cc:d6:b4:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/disk.qcow2
	I0729 17:12:34.354414   10417 main.go:141] libmachine: STDOUT: 
	I0729 17:12:34.354430   10417 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:12:34.354447   10417 client.go:171] duration metric: took 222.582458ms to LocalClient.Create
	I0729 17:12:36.356618   10417 start.go:128] duration metric: took 2.250189042s to createHost
	I0729 17:12:36.356686   10417 start.go:83] releasing machines lock for "enable-default-cni-561000", held for 2.250318541s
	W0729 17:12:36.356768   10417 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:12:36.367596   10417 out.go:177] * Deleting "enable-default-cni-561000" in qemu2 ...
	W0729 17:12:36.398044   10417 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:12:36.398082   10417 start.go:729] Will try again in 5 seconds ...
	I0729 17:12:41.400237   10417 start.go:360] acquireMachinesLock for enable-default-cni-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:12:41.400717   10417 start.go:364] duration metric: took 397.541µs to acquireMachinesLock for "enable-default-cni-561000"
	I0729 17:12:41.400838   10417 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:12:41.401124   10417 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:12:41.410728   10417 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:12:41.461100   10417 start.go:159] libmachine.API.Create for "enable-default-cni-561000" (driver="qemu2")
	I0729 17:12:41.461158   10417 client.go:168] LocalClient.Create starting
	I0729 17:12:41.461284   10417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:12:41.461354   10417 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:41.461370   10417 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:41.461429   10417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:12:41.461473   10417 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:41.461482   10417 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:41.462065   10417 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:12:41.622962   10417 main.go:141] libmachine: Creating SSH key...
	I0729 17:12:41.844844   10417 main.go:141] libmachine: Creating Disk image...
	I0729 17:12:41.844852   10417 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:12:41.845101   10417 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/disk.qcow2
	I0729 17:12:41.854933   10417 main.go:141] libmachine: STDOUT: 
	I0729 17:12:41.854954   10417 main.go:141] libmachine: STDERR: 
	I0729 17:12:41.855008   10417 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/disk.qcow2 +20000M
	I0729 17:12:41.862997   10417 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:12:41.863012   10417 main.go:141] libmachine: STDERR: 
	I0729 17:12:41.863023   10417 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/disk.qcow2
	I0729 17:12:41.863027   10417 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:12:41.863038   10417 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:12:41.863071   10417 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:01:7f:ce:89:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/enable-default-cni-561000/disk.qcow2
	I0729 17:12:41.864730   10417 main.go:141] libmachine: STDOUT: 
	I0729 17:12:41.864746   10417 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:12:41.864758   10417 client.go:171] duration metric: took 403.59575ms to LocalClient.Create
	I0729 17:12:43.866953   10417 start.go:128] duration metric: took 2.46579175s to createHost
	I0729 17:12:43.867035   10417 start.go:83] releasing machines lock for "enable-default-cni-561000", held for 2.466294459s
	W0729 17:12:43.867453   10417 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:12:43.881204   10417 out.go:177] 
	W0729 17:12:43.885160   10417 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:12:43.885191   10417 out.go:239] * 
	* 
	W0729 17:12:43.887898   10417 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:12:43.895195   10417 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.92s)

TestNetworkPlugins/group/bridge/Start (10.02s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.014689042s)

-- stdout --
	* [bridge-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-561000" primary control-plane node in "bridge-561000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-561000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:12:46.101781   10528 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:12:46.101938   10528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:12:46.101941   10528 out.go:304] Setting ErrFile to fd 2...
	I0729 17:12:46.101943   10528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:12:46.102081   10528 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:12:46.103138   10528 out.go:298] Setting JSON to false
	I0729 17:12:46.119431   10528 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6133,"bootTime":1722292233,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:12:46.119500   10528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:12:46.125784   10528 out.go:177] * [bridge-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:12:46.133836   10528 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:12:46.133869   10528 notify.go:220] Checking for updates...
	I0729 17:12:46.141751   10528 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:12:46.143224   10528 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:12:46.146781   10528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:12:46.149862   10528 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:12:46.152796   10528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:12:46.156163   10528 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:12:46.156232   10528 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:12:46.156281   10528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:12:46.160726   10528 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:12:46.167786   10528 start.go:297] selected driver: qemu2
	I0729 17:12:46.167794   10528 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:12:46.167801   10528 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:12:46.170205   10528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:12:46.173810   10528 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:12:46.176937   10528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:12:46.176957   10528 cni.go:84] Creating CNI manager for "bridge"
	I0729 17:12:46.176965   10528 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:12:46.177004   10528 start.go:340] cluster config:
	{Name:bridge-561000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:12:46.180788   10528 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:12:46.188779   10528 out.go:177] * Starting "bridge-561000" primary control-plane node in "bridge-561000" cluster
	I0729 17:12:46.192758   10528 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:12:46.192777   10528 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:12:46.192791   10528 cache.go:56] Caching tarball of preloaded images
	I0729 17:12:46.192858   10528 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:12:46.192865   10528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:12:46.192932   10528 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/bridge-561000/config.json ...
	I0729 17:12:46.192951   10528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/bridge-561000/config.json: {Name:mkf5f3889188ea87c3bc893d290d5833ba34c3ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:12:46.193182   10528 start.go:360] acquireMachinesLock for bridge-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:12:46.193218   10528 start.go:364] duration metric: took 29.625µs to acquireMachinesLock for "bridge-561000"
	I0729 17:12:46.193229   10528 start.go:93] Provisioning new machine with config: &{Name:bridge-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:12:46.193266   10528 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:12:46.201756   10528 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:12:46.220185   10528 start.go:159] libmachine.API.Create for "bridge-561000" (driver="qemu2")
	I0729 17:12:46.220213   10528 client.go:168] LocalClient.Create starting
	I0729 17:12:46.220277   10528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:12:46.220310   10528 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:46.220321   10528 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:46.220358   10528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:12:46.220382   10528 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:46.220394   10528 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:46.220777   10528 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:12:46.371131   10528 main.go:141] libmachine: Creating SSH key...
	I0729 17:12:46.510696   10528 main.go:141] libmachine: Creating Disk image...
	I0729 17:12:46.510702   10528 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:12:46.510925   10528 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/disk.qcow2
	I0729 17:12:46.520432   10528 main.go:141] libmachine: STDOUT: 
	I0729 17:12:46.520451   10528 main.go:141] libmachine: STDERR: 
	I0729 17:12:46.520497   10528 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/disk.qcow2 +20000M
	I0729 17:12:46.528232   10528 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:12:46.528246   10528 main.go:141] libmachine: STDERR: 
	I0729 17:12:46.528260   10528 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/disk.qcow2
	I0729 17:12:46.528264   10528 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:12:46.528279   10528 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:12:46.528305   10528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:a4:0d:93:84:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/disk.qcow2
	I0729 17:12:46.529894   10528 main.go:141] libmachine: STDOUT: 
	I0729 17:12:46.529911   10528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:12:46.529930   10528 client.go:171] duration metric: took 309.712167ms to LocalClient.Create
	I0729 17:12:48.532133   10528 start.go:128] duration metric: took 2.338845s to createHost
	I0729 17:12:48.532190   10528 start.go:83] releasing machines lock for "bridge-561000", held for 2.338963458s
	W0729 17:12:48.532252   10528 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:12:48.549526   10528 out.go:177] * Deleting "bridge-561000" in qemu2 ...
	W0729 17:12:48.575898   10528 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:12:48.575954   10528 start.go:729] Will try again in 5 seconds ...
	I0729 17:12:53.578149   10528 start.go:360] acquireMachinesLock for bridge-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:12:53.578572   10528 start.go:364] duration metric: took 338.208µs to acquireMachinesLock for "bridge-561000"
	I0729 17:12:53.578671   10528 start.go:93] Provisioning new machine with config: &{Name:bridge-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:12:53.579049   10528 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:12:53.587696   10528 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:12:53.638821   10528 start.go:159] libmachine.API.Create for "bridge-561000" (driver="qemu2")
	I0729 17:12:53.638870   10528 client.go:168] LocalClient.Create starting
	I0729 17:12:53.638966   10528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:12:53.639051   10528 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:53.639070   10528 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:53.639138   10528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:12:53.639183   10528 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:53.639198   10528 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:53.639712   10528 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:12:53.800272   10528 main.go:141] libmachine: Creating SSH key...
	I0729 17:12:54.024110   10528 main.go:141] libmachine: Creating Disk image...
	I0729 17:12:54.024121   10528 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:12:54.024368   10528 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/disk.qcow2
	I0729 17:12:54.033661   10528 main.go:141] libmachine: STDOUT: 
	I0729 17:12:54.033683   10528 main.go:141] libmachine: STDERR: 
	I0729 17:12:54.033735   10528 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/disk.qcow2 +20000M
	I0729 17:12:54.041655   10528 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:12:54.041680   10528 main.go:141] libmachine: STDERR: 
	I0729 17:12:54.041701   10528 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/disk.qcow2
	I0729 17:12:54.041717   10528 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:12:54.041731   10528 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:12:54.041772   10528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:8a:77:e1:a4:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/bridge-561000/disk.qcow2
	I0729 17:12:54.043426   10528 main.go:141] libmachine: STDOUT: 
	I0729 17:12:54.043441   10528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:12:54.043455   10528 client.go:171] duration metric: took 404.580625ms to LocalClient.Create
	I0729 17:12:56.045682   10528 start.go:128] duration metric: took 2.466565292s to createHost
	I0729 17:12:56.045782   10528 start.go:83] releasing machines lock for "bridge-561000", held for 2.467184167s
	W0729 17:12:56.046204   10528 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:12:56.055845   10528 out.go:177] 
	W0729 17:12:56.062953   10528 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:12:56.063009   10528 out.go:239] * 
	* 
	W0729 17:12:56.065769   10528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:12:56.073788   10528 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.02s)

TestNetworkPlugins/group/kubenet/Start (9.91s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.90515575s)

-- stdout --
	* [kubenet-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-561000" primary control-plane node in "kubenet-561000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-561000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:12:58.221122   10641 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:12:58.221447   10641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:12:58.221477   10641 out.go:304] Setting ErrFile to fd 2...
	I0729 17:12:58.221483   10641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:12:58.221677   10641 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:12:58.222986   10641 out.go:298] Setting JSON to false
	I0729 17:12:58.239439   10641 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6145,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:12:58.239513   10641 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:12:58.245826   10641 out.go:177] * [kubenet-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:12:58.253826   10641 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:12:58.253878   10641 notify.go:220] Checking for updates...
	I0729 17:12:58.260764   10641 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:12:58.263792   10641 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:12:58.267658   10641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:12:58.270722   10641 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:12:58.273778   10641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:12:58.277090   10641 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:12:58.277159   10641 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:12:58.277209   10641 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:12:58.280783   10641 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:12:58.286737   10641 start.go:297] selected driver: qemu2
	I0729 17:12:58.286747   10641 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:12:58.286754   10641 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:12:58.289122   10641 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:12:58.293748   10641 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:12:58.296906   10641 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:12:58.296949   10641 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0729 17:12:58.296977   10641 start.go:340] cluster config:
	{Name:kubenet-561000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:12:58.300663   10641 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:12:58.308754   10641 out.go:177] * Starting "kubenet-561000" primary control-plane node in "kubenet-561000" cluster
	I0729 17:12:58.312601   10641 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:12:58.312621   10641 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:12:58.312635   10641 cache.go:56] Caching tarball of preloaded images
	I0729 17:12:58.312705   10641 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:12:58.312711   10641 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:12:58.312777   10641 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/kubenet-561000/config.json ...
	I0729 17:12:58.312791   10641 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/kubenet-561000/config.json: {Name:mk18d9faabc3f9d301533a3b5bcda8306cebb11f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:12:58.313016   10641 start.go:360] acquireMachinesLock for kubenet-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:12:58.313052   10641 start.go:364] duration metric: took 29.791µs to acquireMachinesLock for "kubenet-561000"
	I0729 17:12:58.313062   10641 start.go:93] Provisioning new machine with config: &{Name:kubenet-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:12:58.313104   10641 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:12:58.321619   10641 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:12:58.340016   10641 start.go:159] libmachine.API.Create for "kubenet-561000" (driver="qemu2")
	I0729 17:12:58.340040   10641 client.go:168] LocalClient.Create starting
	I0729 17:12:58.340100   10641 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:12:58.340130   10641 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:58.340139   10641 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:58.340176   10641 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:12:58.340199   10641 main.go:141] libmachine: Decoding PEM data...
	I0729 17:12:58.340208   10641 main.go:141] libmachine: Parsing certificate...
	I0729 17:12:58.340572   10641 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:12:58.491685   10641 main.go:141] libmachine: Creating SSH key...
	I0729 17:12:58.628741   10641 main.go:141] libmachine: Creating Disk image...
	I0729 17:12:58.628747   10641 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:12:58.628977   10641 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/disk.qcow2
	I0729 17:12:58.638270   10641 main.go:141] libmachine: STDOUT: 
	I0729 17:12:58.638287   10641 main.go:141] libmachine: STDERR: 
	I0729 17:12:58.638330   10641 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/disk.qcow2 +20000M
	I0729 17:12:58.646381   10641 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:12:58.646395   10641 main.go:141] libmachine: STDERR: 
	I0729 17:12:58.646407   10641 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/disk.qcow2
	I0729 17:12:58.646410   10641 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:12:58.646425   10641 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:12:58.646472   10641 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:7e:5e:a1:c9:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/disk.qcow2
	I0729 17:12:58.648160   10641 main.go:141] libmachine: STDOUT: 
	I0729 17:12:58.648176   10641 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:12:58.648198   10641 client.go:171] duration metric: took 308.154375ms to LocalClient.Create
	I0729 17:13:00.650467   10641 start.go:128] duration metric: took 2.337329709s to createHost
	I0729 17:13:00.650562   10641 start.go:83] releasing machines lock for "kubenet-561000", held for 2.337501458s
	W0729 17:13:00.650671   10641 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:00.661956   10641 out.go:177] * Deleting "kubenet-561000" in qemu2 ...
	W0729 17:13:00.694208   10641 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:00.694247   10641 start.go:729] Will try again in 5 seconds ...
	I0729 17:13:05.696460   10641 start.go:360] acquireMachinesLock for kubenet-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:13:05.696979   10641 start.go:364] duration metric: took 369.208µs to acquireMachinesLock for "kubenet-561000"
	I0729 17:13:05.697133   10641 start.go:93] Provisioning new machine with config: &{Name:kubenet-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:13:05.697412   10641 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:13:05.712831   10641 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:13:05.763516   10641 start.go:159] libmachine.API.Create for "kubenet-561000" (driver="qemu2")
	I0729 17:13:05.763557   10641 client.go:168] LocalClient.Create starting
	I0729 17:13:05.763682   10641 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:13:05.763743   10641 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:05.763757   10641 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:05.763833   10641 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:13:05.763883   10641 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:05.763900   10641 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:05.764452   10641 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:13:05.925546   10641 main.go:141] libmachine: Creating SSH key...
	I0729 17:13:06.034778   10641 main.go:141] libmachine: Creating Disk image...
	I0729 17:13:06.034783   10641 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:13:06.035007   10641 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/disk.qcow2
	I0729 17:13:06.044116   10641 main.go:141] libmachine: STDOUT: 
	I0729 17:13:06.044132   10641 main.go:141] libmachine: STDERR: 
	I0729 17:13:06.044181   10641 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/disk.qcow2 +20000M
	I0729 17:13:06.051993   10641 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:13:06.052015   10641 main.go:141] libmachine: STDERR: 
	I0729 17:13:06.052025   10641 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/disk.qcow2
	I0729 17:13:06.052028   10641 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:13:06.052039   10641 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:13:06.052071   10641 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:2e:07:17:df:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/kubenet-561000/disk.qcow2
	I0729 17:13:06.053667   10641 main.go:141] libmachine: STDOUT: 
	I0729 17:13:06.053682   10641 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:13:06.053694   10641 client.go:171] duration metric: took 290.133ms to LocalClient.Create
	I0729 17:13:08.055867   10641 start.go:128] duration metric: took 2.358428792s to createHost
	I0729 17:13:08.055967   10641 start.go:83] releasing machines lock for "kubenet-561000", held for 2.358936834s
	W0729 17:13:08.056344   10641 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:08.064921   10641 out.go:177] 
	W0729 17:13:08.071942   10641 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:13:08.071965   10641 out.go:239] * 
	* 
	W0729 17:13:08.074744   10641 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:13:08.084945   10641 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.91s)
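Every failed start in this report shares the same root cause: the `socket_vmnet` daemon refused the connection on `/var/run/socket_vmnet`. Before treating these as per-test regressions, it can help to confirm the pattern in a saved copy of the log. A minimal triage sketch follows; the `logs.txt` filename is hypothetical, and the Homebrew restart command assumes `socket_vmnet` was installed the way the minikube qemu2 driver docs describe:

```shell
# Hypothetical saved report file; substitute the real path.
LOG="${LOG:-logs.txt}"

# Count the shared root-cause error. A count matching the number of
# failed VM starts points at one infrastructure fault, not N regressions.
if [ -f "$LOG" ]; then
  grep -c 'Failed to connect to "/var/run/socket_vmnet": Connection refused' "$LOG"
else
  echo "no log file at $LOG"
fi

# If the daemon is simply not running (Homebrew install), restarting it
# usually clears the error before re-running the suite:
#   sudo brew services restart socket_vmnet
```

This is a diagnostic sketch under the stated assumptions, not part of the test harness itself.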

TestNetworkPlugins/group/custom-flannel/Start (9.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.831220125s)

-- stdout --
	* [custom-flannel-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-561000" primary control-plane node in "custom-flannel-561000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-561000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:13:10.232294   10755 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:13:10.232430   10755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:13:10.232436   10755 out.go:304] Setting ErrFile to fd 2...
	I0729 17:13:10.232438   10755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:13:10.232565   10755 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:13:10.233600   10755 out.go:298] Setting JSON to false
	I0729 17:13:10.249610   10755 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6157,"bootTime":1722292233,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:13:10.249684   10755 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:13:10.255139   10755 out.go:177] * [custom-flannel-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:13:10.263127   10755 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:13:10.263192   10755 notify.go:220] Checking for updates...
	I0729 17:13:10.271024   10755 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:13:10.274074   10755 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:13:10.277102   10755 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:13:10.280064   10755 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:13:10.283041   10755 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:13:10.286347   10755 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:13:10.286416   10755 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:13:10.286473   10755 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:13:10.290004   10755 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:13:10.296066   10755 start.go:297] selected driver: qemu2
	I0729 17:13:10.296074   10755 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:13:10.296081   10755 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:13:10.298448   10755 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:13:10.302172   10755 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:13:10.305171   10755 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:13:10.305226   10755 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0729 17:13:10.305243   10755 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0729 17:13:10.305275   10755 start.go:340] cluster config:
	{Name:custom-flannel-561000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:13:10.308926   10755 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:13:10.317070   10755 out.go:177] * Starting "custom-flannel-561000" primary control-plane node in "custom-flannel-561000" cluster
	I0729 17:13:10.320917   10755 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:13:10.320933   10755 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:13:10.320948   10755 cache.go:56] Caching tarball of preloaded images
	I0729 17:13:10.321014   10755 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:13:10.321020   10755 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:13:10.321098   10755 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/custom-flannel-561000/config.json ...
	I0729 17:13:10.321110   10755 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/custom-flannel-561000/config.json: {Name:mk5c5f26e25b540e8c1dd218892b1b955abc80ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:13:10.321330   10755 start.go:360] acquireMachinesLock for custom-flannel-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:13:10.321367   10755 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "custom-flannel-561000"
	I0729 17:13:10.321379   10755 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:13:10.321408   10755 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:13:10.329078   10755 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:13:10.346959   10755 start.go:159] libmachine.API.Create for "custom-flannel-561000" (driver="qemu2")
	I0729 17:13:10.346993   10755 client.go:168] LocalClient.Create starting
	I0729 17:13:10.347063   10755 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:13:10.347095   10755 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:10.347104   10755 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:10.347143   10755 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:13:10.347167   10755 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:10.347189   10755 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:10.347626   10755 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:13:10.500662   10755 main.go:141] libmachine: Creating SSH key...
	I0729 17:13:10.624549   10755 main.go:141] libmachine: Creating Disk image...
	I0729 17:13:10.624559   10755 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:13:10.624789   10755 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/disk.qcow2
	I0729 17:13:10.633707   10755 main.go:141] libmachine: STDOUT: 
	I0729 17:13:10.633723   10755 main.go:141] libmachine: STDERR: 
	I0729 17:13:10.633766   10755 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/disk.qcow2 +20000M
	I0729 17:13:10.641485   10755 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:13:10.641498   10755 main.go:141] libmachine: STDERR: 
	I0729 17:13:10.641526   10755 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/disk.qcow2
	I0729 17:13:10.641532   10755 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:13:10.641545   10755 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:13:10.641574   10755 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:9d:b4:b8:8b:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/disk.qcow2
	I0729 17:13:10.643124   10755 main.go:141] libmachine: STDOUT: 
	I0729 17:13:10.643140   10755 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:13:10.643157   10755 client.go:171] duration metric: took 296.159167ms to LocalClient.Create
	I0729 17:13:12.645367   10755 start.go:128] duration metric: took 2.323928209s to createHost
	I0729 17:13:12.645492   10755 start.go:83] releasing machines lock for "custom-flannel-561000", held for 2.324066583s
	W0729 17:13:12.645565   10755 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:12.659697   10755 out.go:177] * Deleting "custom-flannel-561000" in qemu2 ...
	W0729 17:13:12.686106   10755 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:12.686136   10755 start.go:729] Will try again in 5 seconds ...
	I0729 17:13:17.688383   10755 start.go:360] acquireMachinesLock for custom-flannel-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:13:17.688963   10755 start.go:364] duration metric: took 417.584µs to acquireMachinesLock for "custom-flannel-561000"
	I0729 17:13:17.689088   10755 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:13:17.689318   10755 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:13:17.699025   10755 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:13:17.750259   10755 start.go:159] libmachine.API.Create for "custom-flannel-561000" (driver="qemu2")
	I0729 17:13:17.750310   10755 client.go:168] LocalClient.Create starting
	I0729 17:13:17.750421   10755 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:13:17.750492   10755 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:17.750510   10755 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:17.750570   10755 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:13:17.750613   10755 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:17.750623   10755 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:17.751178   10755 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:13:17.913268   10755 main.go:141] libmachine: Creating SSH key...
	I0729 17:13:17.976349   10755 main.go:141] libmachine: Creating Disk image...
	I0729 17:13:17.976354   10755 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:13:17.976521   10755 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/disk.qcow2
	I0729 17:13:17.985271   10755 main.go:141] libmachine: STDOUT: 
	I0729 17:13:17.985289   10755 main.go:141] libmachine: STDERR: 
	I0729 17:13:17.985334   10755 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/disk.qcow2 +20000M
	I0729 17:13:17.993121   10755 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:13:17.993140   10755 main.go:141] libmachine: STDERR: 
	I0729 17:13:17.993152   10755 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/disk.qcow2
	I0729 17:13:17.993157   10755 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:13:17.993172   10755 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:13:17.993202   10755 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:9c:4f:23:46:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/custom-flannel-561000/disk.qcow2
	I0729 17:13:17.994731   10755 main.go:141] libmachine: STDOUT: 
	I0729 17:13:17.994748   10755 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:13:17.994762   10755 client.go:171] duration metric: took 244.447375ms to LocalClient.Create
	I0729 17:13:19.996935   10755 start.go:128] duration metric: took 2.307593417s to createHost
	I0729 17:13:19.997079   10755 start.go:83] releasing machines lock for "custom-flannel-561000", held for 2.308053125s
	W0729 17:13:19.997399   10755 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:20.005998   10755 out.go:177] 
	W0729 17:13:20.010048   10755 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:13:20.010071   10755 out.go:239] * 
	* 
	W0729 17:13:20.012700   10755 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:13:20.021001   10755 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.83s)

TestNetworkPlugins/group/calico/Start (9.82s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.815519084s)

-- stdout --
	* [calico-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-561000" primary control-plane node in "calico-561000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-561000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:13:22.360180   10879 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:13:22.360313   10879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:13:22.360316   10879 out.go:304] Setting ErrFile to fd 2...
	I0729 17:13:22.360319   10879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:13:22.360457   10879 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:13:22.361475   10879 out.go:298] Setting JSON to false
	I0729 17:13:22.377522   10879 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6169,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:13:22.377594   10879 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:13:22.384072   10879 out.go:177] * [calico-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:13:22.391883   10879 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:13:22.391925   10879 notify.go:220] Checking for updates...
	I0729 17:13:22.400039   10879 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:13:22.405951   10879 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:13:22.408974   10879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:13:22.412005   10879 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:13:22.414904   10879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:13:22.418250   10879 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:13:22.418324   10879 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:13:22.418375   10879 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:13:22.421030   10879 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:13:22.427991   10879 start.go:297] selected driver: qemu2
	I0729 17:13:22.427997   10879 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:13:22.428003   10879 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:13:22.430264   10879 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:13:22.432983   10879 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:13:22.434630   10879 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:13:22.434647   10879 cni.go:84] Creating CNI manager for "calico"
	I0729 17:13:22.434651   10879 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0729 17:13:22.434679   10879 start.go:340] cluster config:
	{Name:calico-561000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:13:22.438483   10879 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:13:22.447045   10879 out.go:177] * Starting "calico-561000" primary control-plane node in "calico-561000" cluster
	I0729 17:13:22.450991   10879 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:13:22.451010   10879 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:13:22.451023   10879 cache.go:56] Caching tarball of preloaded images
	I0729 17:13:22.451099   10879 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:13:22.451111   10879 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:13:22.451177   10879 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/calico-561000/config.json ...
	I0729 17:13:22.451192   10879 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/calico-561000/config.json: {Name:mk7afa089da50dd0f17374ef7940357c96eaa7e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:13:22.451588   10879 start.go:360] acquireMachinesLock for calico-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:13:22.451625   10879 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "calico-561000"
	I0729 17:13:22.451636   10879 start.go:93] Provisioning new machine with config: &{Name:calico-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:13:22.451662   10879 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:13:22.458980   10879 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:13:22.477547   10879 start.go:159] libmachine.API.Create for "calico-561000" (driver="qemu2")
	I0729 17:13:22.477571   10879 client.go:168] LocalClient.Create starting
	I0729 17:13:22.477641   10879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:13:22.477671   10879 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:22.477687   10879 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:22.477722   10879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:13:22.477746   10879 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:22.477755   10879 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:22.478132   10879 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:13:22.627397   10879 main.go:141] libmachine: Creating SSH key...
	I0729 17:13:22.674060   10879 main.go:141] libmachine: Creating Disk image...
	I0729 17:13:22.674065   10879 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:13:22.674285   10879 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/disk.qcow2
	I0729 17:13:22.683399   10879 main.go:141] libmachine: STDOUT: 
	I0729 17:13:22.683418   10879 main.go:141] libmachine: STDERR: 
	I0729 17:13:22.683468   10879 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/disk.qcow2 +20000M
	I0729 17:13:22.691197   10879 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:13:22.691211   10879 main.go:141] libmachine: STDERR: 
	I0729 17:13:22.691229   10879 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/disk.qcow2
	I0729 17:13:22.691233   10879 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:13:22.691247   10879 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:13:22.691279   10879 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:34:8d:84:9a:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/disk.qcow2
	I0729 17:13:22.692912   10879 main.go:141] libmachine: STDOUT: 
	I0729 17:13:22.692936   10879 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:13:22.692956   10879 client.go:171] duration metric: took 215.380875ms to LocalClient.Create
	I0729 17:13:24.695126   10879 start.go:128] duration metric: took 2.243443791s to createHost
	I0729 17:13:24.695185   10879 start.go:83] releasing machines lock for "calico-561000", held for 2.243551167s
	W0729 17:13:24.695301   10879 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:24.711524   10879 out.go:177] * Deleting "calico-561000" in qemu2 ...
	W0729 17:13:24.741173   10879 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:24.741200   10879 start.go:729] Will try again in 5 seconds ...
	I0729 17:13:29.743451   10879 start.go:360] acquireMachinesLock for calico-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:13:29.744016   10879 start.go:364] duration metric: took 414.75µs to acquireMachinesLock for "calico-561000"
	I0729 17:13:29.744161   10879 start.go:93] Provisioning new machine with config: &{Name:calico-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:13:29.744482   10879 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:13:29.753007   10879 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:13:29.806581   10879 start.go:159] libmachine.API.Create for "calico-561000" (driver="qemu2")
	I0729 17:13:29.806634   10879 client.go:168] LocalClient.Create starting
	I0729 17:13:29.806760   10879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:13:29.806853   10879 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:29.806869   10879 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:29.806908   10879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:13:29.806952   10879 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:29.806968   10879 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:29.807491   10879 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:13:29.969002   10879 main.go:141] libmachine: Creating SSH key...
	I0729 17:13:30.085817   10879 main.go:141] libmachine: Creating Disk image...
	I0729 17:13:30.085823   10879 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:13:30.086043   10879 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/disk.qcow2
	I0729 17:13:30.095136   10879 main.go:141] libmachine: STDOUT: 
	I0729 17:13:30.095155   10879 main.go:141] libmachine: STDERR: 
	I0729 17:13:30.095214   10879 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/disk.qcow2 +20000M
	I0729 17:13:30.103063   10879 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:13:30.103076   10879 main.go:141] libmachine: STDERR: 
	I0729 17:13:30.103086   10879 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/disk.qcow2
	I0729 17:13:30.103090   10879 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:13:30.103102   10879 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:13:30.103129   10879 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:7d:7c:f9:5a:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/calico-561000/disk.qcow2
	I0729 17:13:30.104714   10879 main.go:141] libmachine: STDOUT: 
	I0729 17:13:30.104729   10879 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:13:30.104741   10879 client.go:171] duration metric: took 298.100959ms to LocalClient.Create
	I0729 17:13:32.106915   10879 start.go:128] duration metric: took 2.362396541s to createHost
	I0729 17:13:32.107031   10879 start.go:83] releasing machines lock for "calico-561000", held for 2.362966875s
	W0729 17:13:32.107453   10879 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:32.117011   10879 out.go:177] 
	W0729 17:13:32.124051   10879 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:13:32.124104   10879 out.go:239] * 
	* 
	W0729 17:13:32.126675   10879 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:13:32.134083   10879 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.82s)
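
Every failure in this entry traces to the same root cause visible in the logs: `socket_vmnet_client` cannot reach the `/var/run/socket_vmnet` unix socket (`Connection refused`), so QEMU VM creation aborts before provisioning starts. A minimal local check for this condition could look like the sketch below; the socket path is taken from the log, but the `brew services` hint assumes a default Homebrew install of socket_vmnet on the CI host.

```shell
#!/bin/sh
# Sketch: verify the socket_vmnet daemon's unix socket exists before running
# the qemu2-driver integration suite. Path copied from the failing log lines.
SOCKET_PATH="/var/run/socket_vmnet"

if [ -S "$SOCKET_PATH" ]; then
    # -S tests for a socket file; its presence suggests the daemon is up.
    echo "ok: $SOCKET_PATH exists"
else
    echo "missing: $SOCKET_PATH not found; socket_vmnet does not appear to be running"
    # Assumed remediation for a Homebrew-managed install:
    echo "hint: try 'sudo brew services restart socket_vmnet' and re-run the suite"
fi
```

This only confirms the socket file is present; a stale socket left by a crashed daemon would still refuse connections, which is consistent with the repeated retry failures above.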

TestNetworkPlugins/group/false/Start (9.82s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-561000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.822723791s)

-- stdout --
	* [false-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-561000" primary control-plane node in "false-561000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-561000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:13:34.570881   11000 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:13:34.571018   11000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:13:34.571024   11000 out.go:304] Setting ErrFile to fd 2...
	I0729 17:13:34.571027   11000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:13:34.571164   11000 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:13:34.572156   11000 out.go:298] Setting JSON to false
	I0729 17:13:34.588399   11000 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6181,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:13:34.588472   11000 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:13:34.594161   11000 out.go:177] * [false-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:13:34.602140   11000 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:13:34.602180   11000 notify.go:220] Checking for updates...
	I0729 17:13:34.610105   11000 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:13:34.613106   11000 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:13:34.616141   11000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:13:34.619147   11000 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:13:34.622170   11000 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:13:34.625380   11000 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:13:34.625455   11000 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:13:34.625509   11000 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:13:34.629108   11000 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:13:34.636173   11000 start.go:297] selected driver: qemu2
	I0729 17:13:34.636179   11000 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:13:34.636186   11000 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:13:34.638573   11000 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:13:34.642100   11000 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:13:34.646190   11000 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:13:34.646221   11000 cni.go:84] Creating CNI manager for "false"
	I0729 17:13:34.646242   11000 start.go:340] cluster config:
	{Name:false-561000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:13:34.650039   11000 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:13:34.657129   11000 out.go:177] * Starting "false-561000" primary control-plane node in "false-561000" cluster
	I0729 17:13:34.661153   11000 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:13:34.661170   11000 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:13:34.661183   11000 cache.go:56] Caching tarball of preloaded images
	I0729 17:13:34.661254   11000 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:13:34.661265   11000 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:13:34.661336   11000 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/false-561000/config.json ...
	I0729 17:13:34.661347   11000 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/false-561000/config.json: {Name:mkc45fc9af70283082070d9af3e0dbd59221b7ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:13:34.661575   11000 start.go:360] acquireMachinesLock for false-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:13:34.661611   11000 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "false-561000"
	I0729 17:13:34.661622   11000 start.go:93] Provisioning new machine with config: &{Name:false-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:13:34.661650   11000 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:13:34.670125   11000 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:13:34.688499   11000 start.go:159] libmachine.API.Create for "false-561000" (driver="qemu2")
	I0729 17:13:34.688530   11000 client.go:168] LocalClient.Create starting
	I0729 17:13:34.688588   11000 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:13:34.688620   11000 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:34.688630   11000 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:34.688666   11000 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:13:34.688691   11000 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:34.688700   11000 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:34.689118   11000 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:13:34.838504   11000 main.go:141] libmachine: Creating SSH key...
	I0729 17:13:34.920007   11000 main.go:141] libmachine: Creating Disk image...
	I0729 17:13:34.920012   11000 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:13:34.920228   11000 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/disk.qcow2
	I0729 17:13:34.929394   11000 main.go:141] libmachine: STDOUT: 
	I0729 17:13:34.929416   11000 main.go:141] libmachine: STDERR: 
	I0729 17:13:34.929462   11000 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/disk.qcow2 +20000M
	I0729 17:13:34.937334   11000 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:13:34.937346   11000 main.go:141] libmachine: STDERR: 
	I0729 17:13:34.937366   11000 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/disk.qcow2
	I0729 17:13:34.937370   11000 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:13:34.937380   11000 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:13:34.937404   11000 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:bb:64:01:e2:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/disk.qcow2
	I0729 17:13:34.938944   11000 main.go:141] libmachine: STDOUT: 
	I0729 17:13:34.938957   11000 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:13:34.938976   11000 client.go:171] duration metric: took 250.442208ms to LocalClient.Create
	I0729 17:13:36.941153   11000 start.go:128] duration metric: took 2.279482833s to createHost
	I0729 17:13:36.941222   11000 start.go:83] releasing machines lock for "false-561000", held for 2.279602834s
	W0729 17:13:36.941289   11000 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:36.953663   11000 out.go:177] * Deleting "false-561000" in qemu2 ...
	W0729 17:13:36.985237   11000 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:36.985268   11000 start.go:729] Will try again in 5 seconds ...
	I0729 17:13:41.987615   11000 start.go:360] acquireMachinesLock for false-561000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:13:41.988089   11000 start.go:364] duration metric: took 356.667µs to acquireMachinesLock for "false-561000"
	I0729 17:13:41.988229   11000 start.go:93] Provisioning new machine with config: &{Name:false-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:false-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:13:41.988582   11000 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:13:41.996192   11000 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 17:13:42.047792   11000 start.go:159] libmachine.API.Create for "false-561000" (driver="qemu2")
	I0729 17:13:42.047845   11000 client.go:168] LocalClient.Create starting
	I0729 17:13:42.047960   11000 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:13:42.048027   11000 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:42.048041   11000 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:42.048107   11000 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:13:42.048151   11000 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:42.048164   11000 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:42.048684   11000 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:13:42.207822   11000 main.go:141] libmachine: Creating SSH key...
	I0729 17:13:42.298943   11000 main.go:141] libmachine: Creating Disk image...
	I0729 17:13:42.298949   11000 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:13:42.299182   11000 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/disk.qcow2
	I0729 17:13:42.308842   11000 main.go:141] libmachine: STDOUT: 
	I0729 17:13:42.308859   11000 main.go:141] libmachine: STDERR: 
	I0729 17:13:42.308903   11000 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/disk.qcow2 +20000M
	I0729 17:13:42.316723   11000 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:13:42.316741   11000 main.go:141] libmachine: STDERR: 
	I0729 17:13:42.316751   11000 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/disk.qcow2
	I0729 17:13:42.316755   11000 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:13:42.316765   11000 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:13:42.316787   11000 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:79:be:d5:83:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/false-561000/disk.qcow2
	I0729 17:13:42.318422   11000 main.go:141] libmachine: STDOUT: 
	I0729 17:13:42.318439   11000 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:13:42.318451   11000 client.go:171] duration metric: took 270.601542ms to LocalClient.Create
	I0729 17:13:44.320623   11000 start.go:128] duration metric: took 2.33200125s to createHost
	I0729 17:13:44.320732   11000 start.go:83] releasing machines lock for "false-561000", held for 2.332583s
	W0729 17:13:44.321180   11000 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:44.336857   11000 out.go:177] 
	W0729 17:13:44.340061   11000 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:13:44.340091   11000 out.go:239] * 
	* 
	W0729 17:13:44.342807   11000 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:13:44.350860   11000 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.82s)

TestStartStop/group/old-k8s-version/serial/FirstStart (9.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-813000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-813000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.735628334s)

-- stdout --
	* [old-k8s-version-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-813000" primary control-plane node in "old-k8s-version-813000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-813000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:13:46.561511   11114 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:13:46.561641   11114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:13:46.561644   11114 out.go:304] Setting ErrFile to fd 2...
	I0729 17:13:46.561647   11114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:13:46.561776   11114 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:13:46.562979   11114 out.go:298] Setting JSON to false
	I0729 17:13:46.579742   11114 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6193,"bootTime":1722292233,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:13:46.579817   11114 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:13:46.584921   11114 out.go:177] * [old-k8s-version-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:13:46.590268   11114 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:13:46.590311   11114 notify.go:220] Checking for updates...
	I0729 17:13:46.597955   11114 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:13:46.604897   11114 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:13:46.608973   11114 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:13:46.611867   11114 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:13:46.615950   11114 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:13:46.619246   11114 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:13:46.619311   11114 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:13:46.619379   11114 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:13:46.621931   11114 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:13:46.628955   11114 start.go:297] selected driver: qemu2
	I0729 17:13:46.628962   11114 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:13:46.628967   11114 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:13:46.631326   11114 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:13:46.634929   11114 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:13:46.637975   11114 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:13:46.638005   11114 cni.go:84] Creating CNI manager for ""
	I0729 17:13:46.638012   11114 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 17:13:46.638036   11114 start.go:340] cluster config:
	{Name:old-k8s-version-813000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:13:46.641683   11114 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:13:46.649915   11114 out.go:177] * Starting "old-k8s-version-813000" primary control-plane node in "old-k8s-version-813000" cluster
	I0729 17:13:46.652946   11114 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 17:13:46.652962   11114 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 17:13:46.652975   11114 cache.go:56] Caching tarball of preloaded images
	I0729 17:13:46.653037   11114 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:13:46.653044   11114 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 17:13:46.653108   11114 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/old-k8s-version-813000/config.json ...
	I0729 17:13:46.653119   11114 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/old-k8s-version-813000/config.json: {Name:mk27aea2c5a4d4938fb8874bff05676046269215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:13:46.653431   11114 start.go:360] acquireMachinesLock for old-k8s-version-813000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:13:46.653468   11114 start.go:364] duration metric: took 30.792µs to acquireMachinesLock for "old-k8s-version-813000"
	I0729 17:13:46.653496   11114 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:13:46.653524   11114 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:13:46.657911   11114 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:13:46.675163   11114 start.go:159] libmachine.API.Create for "old-k8s-version-813000" (driver="qemu2")
	I0729 17:13:46.675188   11114 client.go:168] LocalClient.Create starting
	I0729 17:13:46.675260   11114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:13:46.675289   11114 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:46.675299   11114 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:46.675333   11114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:13:46.675357   11114 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:46.675363   11114 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:46.675739   11114 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:13:46.825482   11114 main.go:141] libmachine: Creating SSH key...
	I0729 17:13:46.873725   11114 main.go:141] libmachine: Creating Disk image...
	I0729 17:13:46.873731   11114 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:13:46.873935   11114 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2
	I0729 17:13:46.883005   11114 main.go:141] libmachine: STDOUT: 
	I0729 17:13:46.883023   11114 main.go:141] libmachine: STDERR: 
	I0729 17:13:46.883088   11114 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2 +20000M
	I0729 17:13:46.890871   11114 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:13:46.890882   11114 main.go:141] libmachine: STDERR: 
	I0729 17:13:46.890895   11114 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2
	I0729 17:13:46.890899   11114 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:13:46.890910   11114 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:13:46.890937   11114 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:4f:38:93:ff:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2
	I0729 17:13:46.892531   11114 main.go:141] libmachine: STDOUT: 
	I0729 17:13:46.892546   11114 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:13:46.892561   11114 client.go:171] duration metric: took 217.370083ms to LocalClient.Create
	I0729 17:13:48.894764   11114 start.go:128] duration metric: took 2.241211417s to createHost
	I0729 17:13:48.894822   11114 start.go:83] releasing machines lock for "old-k8s-version-813000", held for 2.241344583s
	W0729 17:13:48.894905   11114 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:48.909071   11114 out.go:177] * Deleting "old-k8s-version-813000" in qemu2 ...
	W0729 17:13:48.934937   11114 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:48.934965   11114 start.go:729] Will try again in 5 seconds ...
	I0729 17:13:53.937196   11114 start.go:360] acquireMachinesLock for old-k8s-version-813000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:13:53.937754   11114 start.go:364] duration metric: took 450.541µs to acquireMachinesLock for "old-k8s-version-813000"
	I0729 17:13:53.937887   11114 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:13:53.938161   11114 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:13:53.949733   11114 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:13:54.000517   11114 start.go:159] libmachine.API.Create for "old-k8s-version-813000" (driver="qemu2")
	I0729 17:13:54.000567   11114 client.go:168] LocalClient.Create starting
	I0729 17:13:54.000676   11114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:13:54.000735   11114 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:54.000751   11114 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:54.000819   11114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:13:54.000870   11114 main.go:141] libmachine: Decoding PEM data...
	I0729 17:13:54.000889   11114 main.go:141] libmachine: Parsing certificate...
	I0729 17:13:54.001413   11114 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:13:54.162937   11114 main.go:141] libmachine: Creating SSH key...
	I0729 17:13:54.203847   11114 main.go:141] libmachine: Creating Disk image...
	I0729 17:13:54.203853   11114 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:13:54.204085   11114 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2
	I0729 17:13:54.213373   11114 main.go:141] libmachine: STDOUT: 
	I0729 17:13:54.213406   11114 main.go:141] libmachine: STDERR: 
	I0729 17:13:54.213456   11114 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2 +20000M
	I0729 17:13:54.221167   11114 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:13:54.221187   11114 main.go:141] libmachine: STDERR: 
	I0729 17:13:54.221200   11114 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2
	I0729 17:13:54.221204   11114 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:13:54.221216   11114 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:13:54.221241   11114 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:55:08:31:85:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2
	I0729 17:13:54.222835   11114 main.go:141] libmachine: STDOUT: 
	I0729 17:13:54.222853   11114 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:13:54.222865   11114 client.go:171] duration metric: took 222.292625ms to LocalClient.Create
	I0729 17:13:56.225043   11114 start.go:128] duration metric: took 2.286854875s to createHost
	I0729 17:13:56.225102   11114 start.go:83] releasing machines lock for "old-k8s-version-813000", held for 2.28732s
	W0729 17:13:56.225596   11114 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-813000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-813000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:13:56.234235   11114 out.go:177] 
	W0729 17:13:56.241425   11114 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:13:56.241453   11114 out.go:239] * 
	* 
	W0729 17:13:56.244265   11114 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:13:56.254207   11114 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-813000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000: exit status 7 (69.08025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.81s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-813000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-813000 create -f testdata/busybox.yaml: exit status 1 (29.304084ms)

** stderr ** 
	error: context "old-k8s-version-813000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-813000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000: exit status 7 (30.356041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-813000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000: exit status 7 (29.59625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-813000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-813000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-813000 describe deploy/metrics-server -n kube-system: exit status 1 (27.430791ms)

** stderr ** 
	error: context "old-k8s-version-813000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-813000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000: exit status 7 (30.093667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-813000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-813000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.195207459s)

-- stdout --
	* [old-k8s-version-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-813000" primary control-plane node in "old-k8s-version-813000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:14:00.117157   11166 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:14:00.117284   11166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:00.117287   11166 out.go:304] Setting ErrFile to fd 2...
	I0729 17:14:00.117290   11166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:00.117437   11166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:14:00.118441   11166 out.go:298] Setting JSON to false
	I0729 17:14:00.134660   11166 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6207,"bootTime":1722292233,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:14:00.134726   11166 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:14:00.139861   11166 out.go:177] * [old-k8s-version-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:14:00.146816   11166 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:14:00.146874   11166 notify.go:220] Checking for updates...
	I0729 17:14:00.153636   11166 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:14:00.156734   11166 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:14:00.159797   11166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:14:00.162817   11166 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:14:00.166835   11166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:14:00.170251   11166 config.go:182] Loaded profile config "old-k8s-version-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 17:14:00.173761   11166 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 17:14:00.176807   11166 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:14:00.179807   11166 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 17:14:00.186772   11166 start.go:297] selected driver: qemu2
	I0729 17:14:00.186781   11166 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:14:00.186835   11166 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:14:00.189275   11166 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:14:00.189323   11166 cni.go:84] Creating CNI manager for ""
	I0729 17:14:00.189329   11166 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 17:14:00.189357   11166 start.go:340] cluster config:
	{Name:old-k8s-version-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:14:00.193075   11166 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:00.201769   11166 out.go:177] * Starting "old-k8s-version-813000" primary control-plane node in "old-k8s-version-813000" cluster
	I0729 17:14:00.205813   11166 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 17:14:00.205832   11166 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 17:14:00.205846   11166 cache.go:56] Caching tarball of preloaded images
	I0729 17:14:00.205922   11166 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:14:00.205936   11166 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 17:14:00.206005   11166 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/old-k8s-version-813000/config.json ...
	I0729 17:14:00.206534   11166 start.go:360] acquireMachinesLock for old-k8s-version-813000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:00.206568   11166 start.go:364] duration metric: took 27.541µs to acquireMachinesLock for "old-k8s-version-813000"
	I0729 17:14:00.206578   11166 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:14:00.206583   11166 fix.go:54] fixHost starting: 
	I0729 17:14:00.206711   11166 fix.go:112] recreateIfNeeded on old-k8s-version-813000: state=Stopped err=<nil>
	W0729 17:14:00.206720   11166 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:14:00.210786   11166 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-813000" ...
	I0729 17:14:00.218768   11166 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:00.218804   11166 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:55:08:31:85:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2
	I0729 17:14:00.220963   11166 main.go:141] libmachine: STDOUT: 
	I0729 17:14:00.220985   11166 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:00.221020   11166 fix.go:56] duration metric: took 14.437334ms for fixHost
	I0729 17:14:00.221024   11166 start.go:83] releasing machines lock for "old-k8s-version-813000", held for 14.451458ms
	W0729 17:14:00.221034   11166 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:14:00.221065   11166 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:00.221070   11166 start.go:729] Will try again in 5 seconds ...
	I0729 17:14:05.223270   11166 start.go:360] acquireMachinesLock for old-k8s-version-813000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:05.223696   11166 start.go:364] duration metric: took 309.166µs to acquireMachinesLock for "old-k8s-version-813000"
	I0729 17:14:05.223865   11166 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:14:05.223883   11166 fix.go:54] fixHost starting: 
	I0729 17:14:05.224653   11166 fix.go:112] recreateIfNeeded on old-k8s-version-813000: state=Stopped err=<nil>
	W0729 17:14:05.224679   11166 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:14:05.235065   11166 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-813000" ...
	I0729 17:14:05.238075   11166 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:05.238348   11166 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:55:08:31:85:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/old-k8s-version-813000/disk.qcow2
	I0729 17:14:05.248406   11166 main.go:141] libmachine: STDOUT: 
	I0729 17:14:05.248469   11166 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:05.248553   11166 fix.go:56] duration metric: took 24.671375ms for fixHost
	I0729 17:14:05.248571   11166 start.go:83] releasing machines lock for "old-k8s-version-813000", held for 24.842541ms
	W0729 17:14:05.248733   11166 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:05.257072   11166 out.go:177] 
	W0729 17:14:05.261149   11166 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:14:05.261179   11166 out.go:239] * 
	* 
	W0729 17:14:05.263500   11166 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:14:05.271093   11166 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-813000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000: exit status 7 (68.894584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
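Every qemu2 failure in this run traces to the same host-side condition: the socket_vmnet client cannot reach its unix socket, so `minikube delete -p <profile>` alone will not help. As a diagnostic sketch (hypothetical helper, not part of the test suite; paths assume the `/opt/socket_vmnet` and `/var/run/socket_vmnet` locations shown in the logs above), one might verify the daemon's socket before re-running the suite:

```shell
#!/bin/sh
# Diagnostic sketch: check that the socket_vmnet daemon's unix socket exists
# and that the client binary is present, before retrying the qemu2 driver.
# SOCKET/CLIENT defaults match the paths in the log; override via env if needed.
SOCKET="${SOCKET:-/var/run/socket_vmnet}"
CLIENT="${CLIENT:-/opt/socket_vmnet/bin/socket_vmnet_client}"

# -S tests specifically for a unix-domain socket, not just any file.
if [ -S "$SOCKET" ]; then
  echo "OK: $SOCKET is a unix socket"
else
  echo "MISSING: $SOCKET not found; start the socket_vmnet daemon first"
fi

if [ -x "$CLIENT" ]; then
  echo "client present: $CLIENT"
else
  echo "client missing: $CLIENT"
fi
```

If the socket is missing, restarting the socket_vmnet daemon on the host (for example via whatever launchd job or service manager installed it) is what clears the repeated `Connection refused` errors seen in every section of this report.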

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-813000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000: exit status 7 (31.7805ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-813000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-813000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-813000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.54525ms)

** stderr ** 
	error: context "old-k8s-version-813000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-813000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000: exit status 7 (30.025084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-813000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000: exit status 7 (30.042041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-813000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-813000 --alsologtostderr -v=1: exit status 83 (42.448708ms)

-- stdout --
	* The control-plane node old-k8s-version-813000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-813000"

-- /stdout --
** stderr ** 
	I0729 17:14:05.543264   11185 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:14:05.543973   11185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:05.543976   11185 out.go:304] Setting ErrFile to fd 2...
	I0729 17:14:05.543979   11185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:05.544138   11185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:14:05.544345   11185 out.go:298] Setting JSON to false
	I0729 17:14:05.544354   11185 mustload.go:65] Loading cluster: old-k8s-version-813000
	I0729 17:14:05.544542   11185 config.go:182] Loaded profile config "old-k8s-version-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 17:14:05.548980   11185 out.go:177] * The control-plane node old-k8s-version-813000 host is not running: state=Stopped
	I0729 17:14:05.552932   11185 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-813000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-813000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000: exit status 7 (30.251125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-813000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000: exit status 7 (29.391542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-906000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-906000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.759407958s)

-- stdout --
	* [no-preload-906000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-906000" primary control-plane node in "no-preload-906000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-906000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:14:05.860328   11203 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:14:05.860458   11203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:05.860461   11203 out.go:304] Setting ErrFile to fd 2...
	I0729 17:14:05.860463   11203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:05.860747   11203 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:14:05.862030   11203 out.go:298] Setting JSON to false
	I0729 17:14:05.878453   11203 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6212,"bootTime":1722292233,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:14:05.878518   11203 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:14:05.883826   11203 out.go:177] * [no-preload-906000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:14:05.891042   11203 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:14:05.891108   11203 notify.go:220] Checking for updates...
	I0729 17:14:05.898925   11203 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:14:05.901963   11203 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:14:05.904887   11203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:14:05.907946   11203 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:14:05.910973   11203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:14:05.914184   11203 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:14:05.914243   11203 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:14:05.914306   11203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:14:05.918931   11203 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:14:05.924895   11203 start.go:297] selected driver: qemu2
	I0729 17:14:05.924902   11203 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:14:05.924910   11203 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:14:05.927382   11203 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:14:05.929973   11203 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:14:05.933022   11203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:14:05.933051   11203 cni.go:84] Creating CNI manager for ""
	I0729 17:14:05.933059   11203 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:14:05.933066   11203 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:14:05.933095   11203 start.go:340] cluster config:
	{Name:no-preload-906000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vm
net/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:14:05.936958   11203 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:05.944946   11203 out.go:177] * Starting "no-preload-906000" primary control-plane node in "no-preload-906000" cluster
	I0729 17:14:05.948979   11203 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 17:14:05.949071   11203 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/no-preload-906000/config.json ...
	I0729 17:14:05.949097   11203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/no-preload-906000/config.json: {Name:mkd4909749999d954b7cdbd4df94e9a13eff4af1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:14:05.949103   11203 cache.go:107] acquiring lock: {Name:mke00dafbbc7efe9c124c54d8e3aaae3232df4f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:05.949112   11203 cache.go:107] acquiring lock: {Name:mkfe2d7a8e99a82c34975c2be4321c048fed7415 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:05.949122   11203 cache.go:107] acquiring lock: {Name:mk311510ef6c7efe09bb41fd0c685c63b93f5571 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:05.949180   11203 cache.go:115] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 17:14:05.949188   11203 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 86.792µs
	I0729 17:14:05.949194   11203 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 17:14:05.949221   11203 cache.go:107] acquiring lock: {Name:mkc717d42dad0841ba4a47e4fc7f8c0bbdaa3ae7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:05.949338   11203 cache.go:107] acquiring lock: {Name:mk9240b7175255b32574e550cfc6f12b59ea4ef2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:05.949351   11203 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 17:14:05.949370   11203 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 17:14:05.949387   11203 cache.go:107] acquiring lock: {Name:mkb5d4774459854d753ea4e21612ec591b936345 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:05.949424   11203 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 17:14:05.949411   11203 cache.go:107] acquiring lock: {Name:mk44cf5f59662a80debcb69096aaee3fa343bf51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:05.949401   11203 cache.go:107] acquiring lock: {Name:mk6ea0d3eba0d0b49170d68fefc4d776b7b68abd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:05.949423   11203 start.go:360] acquireMachinesLock for no-preload-906000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:05.949569   11203 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 17:14:05.949595   11203 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 17:14:05.949608   11203 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 17:14:05.949653   11203 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 17:14:05.949675   11203 start.go:364] duration metric: took 202.291µs to acquireMachinesLock for "no-preload-906000"
	I0729 17:14:05.949687   11203 start.go:93] Provisioning new machine with config: &{Name:no-preload-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:14:05.949719   11203 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:14:05.956884   11203 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:14:05.959902   11203 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 17:14:05.959974   11203 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 17:14:05.960530   11203 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 17:14:05.960852   11203 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 17:14:05.960909   11203 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 17:14:05.960960   11203 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 17:14:05.961006   11203 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 17:14:05.975696   11203 start.go:159] libmachine.API.Create for "no-preload-906000" (driver="qemu2")
	I0729 17:14:05.975729   11203 client.go:168] LocalClient.Create starting
	I0729 17:14:05.975825   11203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:14:05.975857   11203 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:05.975868   11203 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:05.975914   11203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:14:05.975938   11203 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:05.975948   11203 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:05.976355   11203 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:14:06.129492   11203 main.go:141] libmachine: Creating SSH key...
	I0729 17:14:06.185514   11203 main.go:141] libmachine: Creating Disk image...
	I0729 17:14:06.185559   11203 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:14:06.185813   11203 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2
	I0729 17:14:06.195615   11203 main.go:141] libmachine: STDOUT: 
	I0729 17:14:06.195633   11203 main.go:141] libmachine: STDERR: 
	I0729 17:14:06.195686   11203 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2 +20000M
	I0729 17:14:06.204555   11203 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:14:06.204573   11203 main.go:141] libmachine: STDERR: 
	I0729 17:14:06.204596   11203 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2
	I0729 17:14:06.204600   11203 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:14:06.204611   11203 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:06.204642   11203 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:3f:83:d9:f4:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2
	I0729 17:14:06.206505   11203 main.go:141] libmachine: STDOUT: 
	I0729 17:14:06.206521   11203 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:06.206540   11203 client.go:171] duration metric: took 230.807ms to LocalClient.Create
	I0729 17:14:06.343144   11203 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 17:14:06.360715   11203 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0729 17:14:06.379696   11203 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0729 17:14:06.395305   11203 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 17:14:06.424870   11203 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 17:14:06.453318   11203 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 17:14:06.482839   11203 cache.go:162] opening:  /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 17:14:06.605426   11203 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 17:14:06.605507   11203 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 656.14475ms
	I0729 17:14:06.605533   11203 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 17:14:08.206815   11203 start.go:128] duration metric: took 2.257053875s to createHost
	I0729 17:14:08.206929   11203 start.go:83] releasing machines lock for "no-preload-906000", held for 2.2572455s
	W0729 17:14:08.206999   11203 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:08.217413   11203 out.go:177] * Deleting "no-preload-906000" in qemu2 ...
	W0729 17:14:08.245785   11203 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:08.245822   11203 start.go:729] Will try again in 5 seconds ...
	I0729 17:14:08.767080   11203 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 17:14:08.767171   11203 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.817956792s
	I0729 17:14:08.767204   11203 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 17:14:09.147711   11203 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 17:14:09.147774   11203 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 3.198662792s
	I0729 17:14:09.147800   11203 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 17:14:09.647578   11203 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 17:14:09.647650   11203 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.698259958s
	I0729 17:14:09.647684   11203 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 17:14:10.254445   11203 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 17:14:10.254498   11203 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.305194292s
	I0729 17:14:10.254522   11203 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 17:14:10.285439   11203 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 17:14:10.285474   11203 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 4.3363725s
	I0729 17:14:10.285494   11203 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 17:14:13.248033   11203 start.go:360] acquireMachinesLock for no-preload-906000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:13.248456   11203 start.go:364] duration metric: took 331.166µs to acquireMachinesLock for "no-preload-906000"
	I0729 17:14:13.248564   11203 start.go:93] Provisioning new machine with config: &{Name:no-preload-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:14:13.248801   11203 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:14:13.260261   11203 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:14:13.311805   11203 start.go:159] libmachine.API.Create for "no-preload-906000" (driver="qemu2")
	I0729 17:14:13.311864   11203 client.go:168] LocalClient.Create starting
	I0729 17:14:13.311985   11203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:14:13.312046   11203 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:13.312071   11203 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:13.312141   11203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:14:13.312184   11203 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:13.312203   11203 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:13.312713   11203 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:14:13.475432   11203 main.go:141] libmachine: Creating SSH key...
	I0729 17:14:13.527687   11203 main.go:141] libmachine: Creating Disk image...
	I0729 17:14:13.527698   11203 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:14:13.527902   11203 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2
	I0729 17:14:13.537133   11203 main.go:141] libmachine: STDOUT: 
	I0729 17:14:13.537152   11203 main.go:141] libmachine: STDERR: 
	I0729 17:14:13.537214   11203 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2 +20000M
	I0729 17:14:13.545403   11203 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:14:13.545416   11203 main.go:141] libmachine: STDERR: 
	I0729 17:14:13.545435   11203 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2
	I0729 17:14:13.545439   11203 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:14:13.545454   11203 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:13.545489   11203 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:af:77:ec:ec:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2
	I0729 17:14:13.547177   11203 main.go:141] libmachine: STDOUT: 
	I0729 17:14:13.547192   11203 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:13.547208   11203 client.go:171] duration metric: took 235.33725ms to LocalClient.Create
	I0729 17:14:14.066131   11203 cache.go:157] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 17:14:14.066208   11203 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 8.11686075s
	I0729 17:14:14.066233   11203 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 17:14:14.066283   11203 cache.go:87] Successfully saved all images to host disk.
	I0729 17:14:15.548328   11203 start.go:128] duration metric: took 2.299501083s to createHost
	I0729 17:14:15.548375   11203 start.go:83] releasing machines lock for "no-preload-906000", held for 2.299895917s
	W0729 17:14:15.548734   11203 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-906000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-906000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:15.557227   11203 out.go:177] 
	W0729 17:14:15.565353   11203 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:14:15.565382   11203 out.go:239] * 
	* 
	W0729 17:14:15.567708   11203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:14:15.576276   11203 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-906000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000: exit status 7 (65.565ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.83s)
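Every failure in this run traces back to the same root cause visible above: `socket_vmnet_client` cannot reach the Unix socket at `/var/run/socket_vmnet` ("Connection refused"), so no QEMU VM ever starts and each dependent test then fails against a stopped host. A minimal pre-flight sketch for the CI host (the socket path is taken from the log's `SocketVMnetPath`; the `check_socket` helper is illustrative, not part of minikube):

```shell
#!/bin/sh
# Pre-flight check for the qemu2 driver's socket_vmnet network:
# the socket_vmnet daemon must be listening on its Unix socket
# before "minikube start --driver=qemu2 --network=socket_vmnet".

check_socket() {
  # Prints "present" if $1 exists and is a Unix domain socket,
  # "missing" otherwise.
  if [ -S "$1" ]; then
    echo present
  else
    echo missing
  fi
}

# Path used by this CI job (SocketVMnetPath in the cluster config).
check_socket /var/run/socket_vmnet
```

If this prints `missing`, restarting the daemon before re-running the suite (for a Homebrew-managed install, which the `/opt/socket_vmnet` paths in the log suggest, something like `sudo brew services restart socket_vmnet`) would be the first thing to try.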

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-906000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-906000 create -f testdata/busybox.yaml: exit status 1 (30.14825ms)

** stderr ** 
	error: context "no-preload-906000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-906000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000: exit status 7 (30.481375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-906000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000: exit status 7 (30.056917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-906000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-906000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-906000 describe deploy/metrics-server -n kube-system: exit status 1 (26.630375ms)

** stderr ** 
	error: context "no-preload-906000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-906000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000: exit status 7 (29.9895ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-906000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-906000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.180412167s)

-- stdout --
	* [no-preload-906000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-906000" primary control-plane node in "no-preload-906000" cluster
	* Restarting existing qemu2 VM for "no-preload-906000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-906000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:14:19.810036   11288 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:14:19.810162   11288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:19.810165   11288 out.go:304] Setting ErrFile to fd 2...
	I0729 17:14:19.810167   11288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:19.810295   11288 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:14:19.811258   11288 out.go:298] Setting JSON to false
	I0729 17:14:19.827316   11288 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6226,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:14:19.827393   11288 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:14:19.831259   11288 out.go:177] * [no-preload-906000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:14:19.838291   11288 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:14:19.838359   11288 notify.go:220] Checking for updates...
	I0729 17:14:19.846216   11288 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:14:19.849296   11288 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:14:19.852260   11288 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:14:19.855239   11288 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:14:19.858252   11288 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:14:19.861500   11288 config.go:182] Loaded profile config "no-preload-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 17:14:19.861772   11288 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:14:19.865177   11288 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 17:14:19.872219   11288 start.go:297] selected driver: qemu2
	I0729 17:14:19.872226   11288 start.go:901] validating driver "qemu2" against &{Name:no-preload-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:14:19.872298   11288 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:14:19.874485   11288 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:14:19.874509   11288 cni.go:84] Creating CNI manager for ""
	I0729 17:14:19.874516   11288 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:14:19.874544   11288 start.go:340] cluster config:
	{Name:no-preload-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:14:19.877937   11288 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:19.886068   11288 out.go:177] * Starting "no-preload-906000" primary control-plane node in "no-preload-906000" cluster
	I0729 17:14:19.890212   11288 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 17:14:19.890288   11288 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/no-preload-906000/config.json ...
	I0729 17:14:19.890316   11288 cache.go:107] acquiring lock: {Name:mk9240b7175255b32574e550cfc6f12b59ea4ef2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:19.890312   11288 cache.go:107] acquiring lock: {Name:mke00dafbbc7efe9c124c54d8e3aaae3232df4f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:19.890348   11288 cache.go:107] acquiring lock: {Name:mkb5d4774459854d753ea4e21612ec591b936345 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:19.890380   11288 cache.go:115] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 17:14:19.890386   11288 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 76.667µs
	I0729 17:14:19.890388   11288 cache.go:115] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 17:14:19.890392   11288 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 17:14:19.890394   11288 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 80.542µs
	I0729 17:14:19.890405   11288 cache.go:107] acquiring lock: {Name:mk44cf5f59662a80debcb69096aaee3fa343bf51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:19.890410   11288 cache.go:115] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 17:14:19.890420   11288 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 72.875µs
	I0729 17:14:19.890428   11288 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 17:14:19.890419   11288 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 17:14:19.890431   11288 cache.go:107] acquiring lock: {Name:mkfe2d7a8e99a82c34975c2be4321c048fed7415 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:19.890442   11288 cache.go:115] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 17:14:19.890439   11288 cache.go:107] acquiring lock: {Name:mk6ea0d3eba0d0b49170d68fefc4d776b7b68abd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:19.890446   11288 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 41.417µs
	I0729 17:14:19.890449   11288 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 17:14:19.890317   11288 cache.go:107] acquiring lock: {Name:mk311510ef6c7efe09bb41fd0c685c63b93f5571 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:19.890474   11288 cache.go:115] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 17:14:19.890480   11288 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 50.625µs
	I0729 17:14:19.890484   11288 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 17:14:19.890486   11288 cache.go:115] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 17:14:19.890491   11288 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 52.5µs
	I0729 17:14:19.890498   11288 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 17:14:19.890493   11288 cache.go:115] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 17:14:19.890503   11288 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 193.25µs
	I0729 17:14:19.890506   11288 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 17:14:19.890535   11288 cache.go:107] acquiring lock: {Name:mkc717d42dad0841ba4a47e4fc7f8c0bbdaa3ae7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:19.890579   11288 cache.go:115] /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 17:14:19.890583   11288 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 110.125µs
	I0729 17:14:19.890591   11288 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 17:14:19.890596   11288 cache.go:87] Successfully saved all images to host disk.
	I0729 17:14:19.890718   11288 start.go:360] acquireMachinesLock for no-preload-906000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:19.890771   11288 start.go:364] duration metric: took 45.958µs to acquireMachinesLock for "no-preload-906000"
	I0729 17:14:19.890780   11288 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:14:19.890785   11288 fix.go:54] fixHost starting: 
	I0729 17:14:19.890905   11288 fix.go:112] recreateIfNeeded on no-preload-906000: state=Stopped err=<nil>
	W0729 17:14:19.890918   11288 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:14:19.898263   11288 out.go:177] * Restarting existing qemu2 VM for "no-preload-906000" ...
	I0729 17:14:19.902242   11288 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:19.902289   11288 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:af:77:ec:ec:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2
	I0729 17:14:19.904383   11288 main.go:141] libmachine: STDOUT: 
	I0729 17:14:19.904405   11288 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:19.904432   11288 fix.go:56] duration metric: took 13.647459ms for fixHost
	I0729 17:14:19.904436   11288 start.go:83] releasing machines lock for "no-preload-906000", held for 13.660708ms
	W0729 17:14:19.904443   11288 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:14:19.904477   11288 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:19.904482   11288 start.go:729] Will try again in 5 seconds ...
	I0729 17:14:24.906700   11288 start.go:360] acquireMachinesLock for no-preload-906000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:24.907177   11288 start.go:364] duration metric: took 375.208µs to acquireMachinesLock for "no-preload-906000"
	I0729 17:14:24.907325   11288 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:14:24.907348   11288 fix.go:54] fixHost starting: 
	I0729 17:14:24.908161   11288 fix.go:112] recreateIfNeeded on no-preload-906000: state=Stopped err=<nil>
	W0729 17:14:24.908194   11288 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:14:24.911685   11288 out.go:177] * Restarting existing qemu2 VM for "no-preload-906000" ...
	I0729 17:14:24.918984   11288 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:24.919201   11288 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:af:77:ec:ec:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/no-preload-906000/disk.qcow2
	I0729 17:14:24.928422   11288 main.go:141] libmachine: STDOUT: 
	I0729 17:14:24.928499   11288 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:24.928621   11288 fix.go:56] duration metric: took 21.272875ms for fixHost
	I0729 17:14:24.928640   11288 start.go:83] releasing machines lock for "no-preload-906000", held for 21.4365ms
	W0729 17:14:24.928894   11288 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-906000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-906000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:24.936680   11288 out.go:177] 
	W0729 17:14:24.939675   11288 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:14:24.939708   11288 out.go:239] * 
	* 
	W0729 17:14:24.942490   11288 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:14:24.949647   11288 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-906000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000: exit status 7 (70.212708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-906000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000: exit status 7 (32.617292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-906000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-906000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-906000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.244041ms)

** stderr ** 
	error: context "no-preload-906000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-906000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000: exit status 7 (30.812292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-906000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000: exit status 7 (30.643708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-906000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-906000 --alsologtostderr -v=1: exit status 83 (42.032292ms)

-- stdout --
	* The control-plane node no-preload-906000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-906000"

-- /stdout --
** stderr ** 
	I0729 17:14:25.225994   11311 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:14:25.226155   11311 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:25.226158   11311 out.go:304] Setting ErrFile to fd 2...
	I0729 17:14:25.226161   11311 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:25.226287   11311 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:14:25.226511   11311 out.go:298] Setting JSON to false
	I0729 17:14:25.226517   11311 mustload.go:65] Loading cluster: no-preload-906000
	I0729 17:14:25.226714   11311 config.go:182] Loaded profile config "no-preload-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 17:14:25.230584   11311 out.go:177] * The control-plane node no-preload-906000 host is not running: state=Stopped
	I0729 17:14:25.234523   11311 out.go:177]   To start a cluster, run: "minikube start -p no-preload-906000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-906000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000: exit status 7 (28.97675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-906000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000: exit status 7 (30.002583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (10.59s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-479000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-479000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (10.51852975s)

-- stdout --
	* [embed-certs-479000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-479000" primary control-plane node in "embed-certs-479000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-479000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:14:25.545793   11328 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:14:25.545940   11328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:25.545944   11328 out.go:304] Setting ErrFile to fd 2...
	I0729 17:14:25.545946   11328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:25.546080   11328 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:14:25.547182   11328 out.go:298] Setting JSON to false
	I0729 17:14:25.563258   11328 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6232,"bootTime":1722292233,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:14:25.563322   11328 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:14:25.567607   11328 out.go:177] * [embed-certs-479000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:14:25.574488   11328 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:14:25.574552   11328 notify.go:220] Checking for updates...
	I0729 17:14:25.581478   11328 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:14:25.584549   11328 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:14:25.587566   11328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:14:25.588995   11328 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:14:25.592539   11328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:14:25.595922   11328 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:14:25.595985   11328 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:14:25.596033   11328 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:14:25.600356   11328 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:14:25.607559   11328 start.go:297] selected driver: qemu2
	I0729 17:14:25.607566   11328 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:14:25.607572   11328 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:14:25.609789   11328 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:14:25.613345   11328 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:14:25.616569   11328 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:14:25.616612   11328 cni.go:84] Creating CNI manager for ""
	I0729 17:14:25.616621   11328 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:14:25.616626   11328 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:14:25.616670   11328 start.go:340] cluster config:
	{Name:embed-certs-479000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-479000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:14:25.620435   11328 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:25.628525   11328 out.go:177] * Starting "embed-certs-479000" primary control-plane node in "embed-certs-479000" cluster
	I0729 17:14:25.644555   11328 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:14:25.644573   11328 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:14:25.644587   11328 cache.go:56] Caching tarball of preloaded images
	I0729 17:14:25.644687   11328 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:14:25.644693   11328 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:14:25.644782   11328 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/embed-certs-479000/config.json ...
	I0729 17:14:25.644797   11328 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/embed-certs-479000/config.json: {Name:mk2d6b70ab4b964da930038b0c6da9b13a958b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:14:25.645258   11328 start.go:360] acquireMachinesLock for embed-certs-479000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:25.645310   11328 start.go:364] duration metric: took 42.917µs to acquireMachinesLock for "embed-certs-479000"
	I0729 17:14:25.645325   11328 start.go:93] Provisioning new machine with config: &{Name:embed-certs-479000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-479000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:14:25.645375   11328 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:14:25.653507   11328 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:14:25.672694   11328 start.go:159] libmachine.API.Create for "embed-certs-479000" (driver="qemu2")
	I0729 17:14:25.672732   11328 client.go:168] LocalClient.Create starting
	I0729 17:14:25.672798   11328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:14:25.672829   11328 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:25.672840   11328 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:25.672876   11328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:14:25.672902   11328 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:25.672911   11328 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:25.673341   11328 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:14:25.823266   11328 main.go:141] libmachine: Creating SSH key...
	I0729 17:14:26.191368   11328 main.go:141] libmachine: Creating Disk image...
	I0729 17:14:26.191380   11328 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:14:26.191636   11328 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2
	I0729 17:14:26.201264   11328 main.go:141] libmachine: STDOUT: 
	I0729 17:14:26.201282   11328 main.go:141] libmachine: STDERR: 
	I0729 17:14:26.201328   11328 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2 +20000M
	I0729 17:14:26.209141   11328 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:14:26.209172   11328 main.go:141] libmachine: STDERR: 
	I0729 17:14:26.209185   11328 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2
	I0729 17:14:26.209191   11328 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:14:26.209205   11328 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:26.209238   11328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:98:f8:65:45:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2
	I0729 17:14:26.210952   11328 main.go:141] libmachine: STDOUT: 
	I0729 17:14:26.210970   11328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:26.210996   11328 client.go:171] duration metric: took 538.259416ms to LocalClient.Create
	I0729 17:14:28.213217   11328 start.go:128] duration metric: took 2.567805208s to createHost
	I0729 17:14:28.213278   11328 start.go:83] releasing machines lock for "embed-certs-479000", held for 2.567958083s
	W0729 17:14:28.213365   11328 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:28.230625   11328 out.go:177] * Deleting "embed-certs-479000" in qemu2 ...
	W0729 17:14:28.256264   11328 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:28.256301   11328 start.go:729] Will try again in 5 seconds ...
	I0729 17:14:33.258607   11328 start.go:360] acquireMachinesLock for embed-certs-479000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:33.259179   11328 start.go:364] duration metric: took 432µs to acquireMachinesLock for "embed-certs-479000"
	I0729 17:14:33.259362   11328 start.go:93] Provisioning new machine with config: &{Name:embed-certs-479000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-479000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:14:33.259628   11328 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:14:33.265454   11328 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:14:33.314703   11328 start.go:159] libmachine.API.Create for "embed-certs-479000" (driver="qemu2")
	I0729 17:14:33.314753   11328 client.go:168] LocalClient.Create starting
	I0729 17:14:33.314875   11328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:14:33.314952   11328 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:33.314967   11328 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:33.315033   11328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:14:33.315076   11328 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:33.315091   11328 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:33.315957   11328 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:14:33.477270   11328 main.go:141] libmachine: Creating SSH key...
	I0729 17:14:33.970838   11328 main.go:141] libmachine: Creating Disk image...
	I0729 17:14:33.970851   11328 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:14:33.971111   11328 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2
	I0729 17:14:33.980504   11328 main.go:141] libmachine: STDOUT: 
	I0729 17:14:33.980527   11328 main.go:141] libmachine: STDERR: 
	I0729 17:14:33.980586   11328 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2 +20000M
	I0729 17:14:33.988395   11328 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:14:33.988409   11328 main.go:141] libmachine: STDERR: 
	I0729 17:14:33.988423   11328 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2
	I0729 17:14:33.988429   11328 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:14:33.988442   11328 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:33.988477   11328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:2b:29:a4:5d:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2
	I0729 17:14:33.990118   11328 main.go:141] libmachine: STDOUT: 
	I0729 17:14:33.990132   11328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:33.990146   11328 client.go:171] duration metric: took 675.38775ms to LocalClient.Create
	I0729 17:14:35.992365   11328 start.go:128] duration metric: took 2.732678166s to createHost
	I0729 17:14:35.992434   11328 start.go:83] releasing machines lock for "embed-certs-479000", held for 2.733231583s
	W0729 17:14:35.992865   11328 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-479000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-479000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:36.005452   11328 out.go:177] 
	W0729 17:14:36.010486   11328 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:14:36.010529   11328 out.go:239] * 
	* 
	W0729 17:14:36.013171   11328 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:14:36.021448   11328 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-479000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000: exit status 7 (65.958375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.59s)

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-479000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-479000 create -f testdata/busybox.yaml: exit status 1 (29.608917ms)

** stderr ** 
	error: context "embed-certs-479000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-479000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000: exit status 7 (30.421791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-479000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000: exit status 7 (29.542709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-479000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-479000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-479000 describe deploy/metrics-server -n kube-system: exit status 1 (26.370542ms)

** stderr ** 
	error: context "embed-certs-479000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-479000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000: exit status 7 (30.817375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-479000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-479000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.184037s)

-- stdout --
	* [embed-certs-479000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-479000" primary control-plane node in "embed-certs-479000" cluster
	* Restarting existing qemu2 VM for "embed-certs-479000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-479000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:14:38.431540   11379 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:14:38.431677   11379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:38.431682   11379 out.go:304] Setting ErrFile to fd 2...
	I0729 17:14:38.431684   11379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:38.431807   11379 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:14:38.432781   11379 out.go:298] Setting JSON to false
	I0729 17:14:38.448674   11379 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6245,"bootTime":1722292233,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:14:38.448735   11379 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:14:38.454062   11379 out.go:177] * [embed-certs-479000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:14:38.461007   11379 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:14:38.461053   11379 notify.go:220] Checking for updates...
	I0729 17:14:38.467965   11379 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:14:38.470955   11379 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:14:38.473990   11379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:14:38.475384   11379 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:14:38.479025   11379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:14:38.482221   11379 config.go:182] Loaded profile config "embed-certs-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:14:38.482468   11379 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:14:38.484176   11379 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 17:14:38.491032   11379 start.go:297] selected driver: qemu2
	I0729 17:14:38.491040   11379 start.go:901] validating driver "qemu2" against &{Name:embed-certs-479000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-479000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:14:38.491103   11379 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:14:38.493207   11379 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:14:38.493254   11379 cni.go:84] Creating CNI manager for ""
	I0729 17:14:38.493261   11379 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:14:38.493281   11379 start.go:340] cluster config:
	{Name:embed-certs-479000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-479000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:14:38.496477   11379 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:38.505043   11379 out.go:177] * Starting "embed-certs-479000" primary control-plane node in "embed-certs-479000" cluster
	I0729 17:14:38.508966   11379 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:14:38.508979   11379 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:14:38.508987   11379 cache.go:56] Caching tarball of preloaded images
	I0729 17:14:38.509040   11379 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:14:38.509046   11379 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:14:38.509095   11379 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/embed-certs-479000/config.json ...
	I0729 17:14:38.509537   11379 start.go:360] acquireMachinesLock for embed-certs-479000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:38.509564   11379 start.go:364] duration metric: took 21.375µs to acquireMachinesLock for "embed-certs-479000"
	I0729 17:14:38.509572   11379 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:14:38.509578   11379 fix.go:54] fixHost starting: 
	I0729 17:14:38.509697   11379 fix.go:112] recreateIfNeeded on embed-certs-479000: state=Stopped err=<nil>
	W0729 17:14:38.509705   11379 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:14:38.518001   11379 out.go:177] * Restarting existing qemu2 VM for "embed-certs-479000" ...
	I0729 17:14:38.521950   11379 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:38.521986   11379 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:2b:29:a4:5d:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2
	I0729 17:14:38.523903   11379 main.go:141] libmachine: STDOUT: 
	I0729 17:14:38.523922   11379 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:38.523949   11379 fix.go:56] duration metric: took 14.372459ms for fixHost
	I0729 17:14:38.523953   11379 start.go:83] releasing machines lock for "embed-certs-479000", held for 14.385042ms
	W0729 17:14:38.523961   11379 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:14:38.523999   11379 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:38.524004   11379 start.go:729] Will try again in 5 seconds ...
	I0729 17:14:43.526227   11379 start.go:360] acquireMachinesLock for embed-certs-479000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:43.526664   11379 start.go:364] duration metric: took 316.041µs to acquireMachinesLock for "embed-certs-479000"
	I0729 17:14:43.526816   11379 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:14:43.526837   11379 fix.go:54] fixHost starting: 
	I0729 17:14:43.527592   11379 fix.go:112] recreateIfNeeded on embed-certs-479000: state=Stopped err=<nil>
	W0729 17:14:43.527619   11379 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:14:43.536169   11379 out.go:177] * Restarting existing qemu2 VM for "embed-certs-479000" ...
	I0729 17:14:43.541139   11379 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:43.541338   11379 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:2b:29:a4:5d:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/embed-certs-479000/disk.qcow2
	I0729 17:14:43.550900   11379 main.go:141] libmachine: STDOUT: 
	I0729 17:14:43.550952   11379 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:43.551049   11379 fix.go:56] duration metric: took 24.212833ms for fixHost
	I0729 17:14:43.551066   11379 start.go:83] releasing machines lock for "embed-certs-479000", held for 24.375375ms
	W0729 17:14:43.551324   11379 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-479000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-479000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:43.560152   11379 out.go:177] 
	W0729 17:14:43.563072   11379 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:14:43.563095   11379 out.go:239] * 
	* 
	W0729 17:14:43.565923   11379 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:14:43.575082   11379 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-479000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000: exit status 7 (69.990209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-479000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000: exit status 7 (32.160083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-479000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-479000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-479000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.401791ms)

** stderr ** 
	error: context "embed-certs-479000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-479000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000: exit status 7 (29.608333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-479000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000: exit status 7 (30.22925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-479000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-479000 --alsologtostderr -v=1: exit status 83 (42.057667ms)

-- stdout --
	* The control-plane node embed-certs-479000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-479000"

-- /stdout --
** stderr ** 
	I0729 17:14:43.843831   11402 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:14:43.843993   11402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:43.843997   11402 out.go:304] Setting ErrFile to fd 2...
	I0729 17:14:43.843999   11402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:43.844136   11402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:14:43.844342   11402 out.go:298] Setting JSON to false
	I0729 17:14:43.844348   11402 mustload.go:65] Loading cluster: embed-certs-479000
	I0729 17:14:43.844531   11402 config.go:182] Loaded profile config "embed-certs-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:14:43.848673   11402 out.go:177] * The control-plane node embed-certs-479000 host is not running: state=Stopped
	I0729 17:14:43.852428   11402 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-479000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-479000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000: exit status 7 (29.134584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-479000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000: exit status 7 (30.164375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-294000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-294000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (10.179124875s)

-- stdout --
	* [default-k8s-diff-port-294000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-294000" primary control-plane node in "default-k8s-diff-port-294000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-294000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:14:44.264340   11426 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:14:44.264481   11426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:44.264484   11426 out.go:304] Setting ErrFile to fd 2...
	I0729 17:14:44.264487   11426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:44.264621   11426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:14:44.265689   11426 out.go:298] Setting JSON to false
	I0729 17:14:44.281996   11426 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6251,"bootTime":1722292233,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:14:44.282059   11426 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:14:44.286679   11426 out.go:177] * [default-k8s-diff-port-294000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:14:44.292644   11426 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:14:44.292692   11426 notify.go:220] Checking for updates...
	I0729 17:14:44.300558   11426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:14:44.303632   11426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:14:44.307741   11426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:14:44.310616   11426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:14:44.313583   11426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:14:44.316916   11426 config.go:182] Loaded profile config "cert-expiration-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:14:44.316985   11426 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:14:44.317037   11426 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:14:44.320618   11426 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:14:44.327582   11426 start.go:297] selected driver: qemu2
	I0729 17:14:44.327587   11426 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:14:44.327593   11426 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:14:44.329946   11426 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:14:44.332540   11426 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:14:44.336659   11426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:14:44.336704   11426 cni.go:84] Creating CNI manager for ""
	I0729 17:14:44.336716   11426 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:14:44.336722   11426 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:14:44.336762   11426 start.go:340] cluster config:
	{Name:default-k8s-diff-port-294000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:14:44.340509   11426 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:44.348622   11426 out.go:177] * Starting "default-k8s-diff-port-294000" primary control-plane node in "default-k8s-diff-port-294000" cluster
	I0729 17:14:44.352622   11426 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:14:44.352647   11426 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:14:44.352659   11426 cache.go:56] Caching tarball of preloaded images
	I0729 17:14:44.352729   11426 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:14:44.352735   11426 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:14:44.352812   11426 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/default-k8s-diff-port-294000/config.json ...
	I0729 17:14:44.352824   11426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/default-k8s-diff-port-294000/config.json: {Name:mke0c021ae60a03fe10c154ac5384d68a4ea2b18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:14:44.353202   11426 start.go:360] acquireMachinesLock for default-k8s-diff-port-294000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:44.353240   11426 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "default-k8s-diff-port-294000"
	I0729 17:14:44.353251   11426 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:14:44.353280   11426 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:14:44.361601   11426 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:14:44.380195   11426 start.go:159] libmachine.API.Create for "default-k8s-diff-port-294000" (driver="qemu2")
	I0729 17:14:44.380225   11426 client.go:168] LocalClient.Create starting
	I0729 17:14:44.380290   11426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:14:44.380322   11426 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:44.380330   11426 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:44.380374   11426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:14:44.380401   11426 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:44.380412   11426 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:44.380872   11426 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:14:44.617524   11426 main.go:141] libmachine: Creating SSH key...
	I0729 17:14:44.812275   11426 main.go:141] libmachine: Creating Disk image...
	I0729 17:14:44.812282   11426 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:14:44.812492   11426 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0729 17:14:44.822015   11426 main.go:141] libmachine: STDOUT: 
	I0729 17:14:44.822033   11426 main.go:141] libmachine: STDERR: 
	I0729 17:14:44.822092   11426 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2 +20000M
	I0729 17:14:44.829856   11426 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:14:44.829869   11426 main.go:141] libmachine: STDERR: 
	I0729 17:14:44.829891   11426 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0729 17:14:44.829895   11426 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:14:44.829905   11426 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:44.829929   11426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:e8:33:7d:97:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0729 17:14:44.831505   11426 main.go:141] libmachine: STDOUT: 
	I0729 17:14:44.831517   11426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:44.831536   11426 client.go:171] duration metric: took 451.306ms to LocalClient.Create
	I0729 17:14:46.833767   11426 start.go:128] duration metric: took 2.4804705s to createHost
	I0729 17:14:46.833830   11426 start.go:83] releasing machines lock for "default-k8s-diff-port-294000", held for 2.480582708s
	W0729 17:14:46.833890   11426 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:46.849693   11426 out.go:177] * Deleting "default-k8s-diff-port-294000" in qemu2 ...
	W0729 17:14:46.879133   11426 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:46.879184   11426 start.go:729] Will try again in 5 seconds ...
	I0729 17:14:51.881390   11426 start.go:360] acquireMachinesLock for default-k8s-diff-port-294000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:51.891536   11426 start.go:364] duration metric: took 10.03725ms to acquireMachinesLock for "default-k8s-diff-port-294000"
	I0729 17:14:51.891592   11426 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:14:51.891807   11426 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:14:51.902682   11426 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:14:51.950317   11426 start.go:159] libmachine.API.Create for "default-k8s-diff-port-294000" (driver="qemu2")
	I0729 17:14:51.950360   11426 client.go:168] LocalClient.Create starting
	I0729 17:14:51.950467   11426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:14:51.950535   11426 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:51.950556   11426 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:51.950619   11426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:14:51.950663   11426 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:51.950676   11426 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:51.951163   11426 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:14:52.231361   11426 main.go:141] libmachine: Creating SSH key...
	I0729 17:14:52.353707   11426 main.go:141] libmachine: Creating Disk image...
	I0729 17:14:52.353713   11426 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:14:52.353881   11426 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0729 17:14:52.363018   11426 main.go:141] libmachine: STDOUT: 
	I0729 17:14:52.363038   11426 main.go:141] libmachine: STDERR: 
	I0729 17:14:52.363100   11426 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2 +20000M
	I0729 17:14:52.370850   11426 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:14:52.370865   11426 main.go:141] libmachine: STDERR: 
	I0729 17:14:52.370884   11426 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0729 17:14:52.370890   11426 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:14:52.370902   11426 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:52.370941   11426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:2e:36:18:0a:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0729 17:14:52.372607   11426 main.go:141] libmachine: STDOUT: 
	I0729 17:14:52.372622   11426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:52.372636   11426 client.go:171] duration metric: took 422.27175ms to LocalClient.Create
	I0729 17:14:54.374905   11426 start.go:128] duration metric: took 2.48306575s to createHost
	I0729 17:14:54.374975   11426 start.go:83] releasing machines lock for "default-k8s-diff-port-294000", held for 2.483414417s
	W0729 17:14:54.375226   11426 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-294000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-294000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:54.388759   11426 out.go:177] 
	W0729 17:14:54.392902   11426 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:14:54.392936   11426 out.go:239] * 
	* 
	W0729 17:14:54.394931   11426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:14:54.404837   11426 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-294000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
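Every failed create attempt in the stderr block above dies at the same step: `socket_vmnet_client` reports `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which means the `socket_vmnet` daemon was not listening on this agent when the suite ran. A minimal preflight sketch for the CI host follows; the socket path is taken from the log, while the process name and the Homebrew service hint are assumptions about how `socket_vmnet` is installed here:

```shell
# Preflight check: is the socket_vmnet daemon actually serving its socket?
# SOCK matches the path in the log above; the pgrep pattern is an assumption
# about the daemon's process name on this agent.
SOCK=/var/run/socket_vmnet

if [ -S "$SOCK" ]; then
  echo "socket present: $SOCK"
else
  echo "socket missing: $SOCK"
fi

if pgrep -f socket_vmnet >/dev/null 2>&1; then
  echo "socket_vmnet process found"
else
  echo "socket_vmnet process not found"
fi
```

On a run like the one captured here, both checks would take the failure branch; restarting the daemon before the suite (e.g. `sudo brew services restart socket_vmnet` on a Homebrew install) typically clears this whole class of "Connection refused" failures.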
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (50.227916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.23s)

TestStartStop/group/newest-cni/serial/FirstStart (12.11s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-051000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-051000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (12.034979917s)

-- stdout --
	* [newest-cni-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-051000" primary control-plane node in "newest-cni-051000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-051000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:14:52.111496   11453 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:14:52.112017   11453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:52.112030   11453 out.go:304] Setting ErrFile to fd 2...
	I0729 17:14:52.112038   11453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:52.112665   11453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:14:52.114434   11453 out.go:298] Setting JSON to false
	I0729 17:14:52.134149   11453 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6259,"bootTime":1722292233,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:14:52.134239   11453 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:14:52.153939   11453 out.go:177] * [newest-cni-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:14:52.163946   11453 notify.go:220] Checking for updates...
	I0729 17:14:52.169869   11453 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:14:52.181915   11453 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:14:52.189688   11453 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:14:52.197869   11453 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:14:52.204865   11453 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:14:52.211777   11453 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:14:52.217298   11453 config.go:182] Loaded profile config "default-k8s-diff-port-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:14:52.217378   11453 config.go:182] Loaded profile config "multinode-877000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:14:52.217441   11453 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:14:52.222855   11453 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 17:14:52.229839   11453 start.go:297] selected driver: qemu2
	I0729 17:14:52.229846   11453 start.go:901] validating driver "qemu2" against <nil>
	I0729 17:14:52.229857   11453 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:14:52.232771   11453 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 17:14:52.232804   11453 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 17:14:52.241831   11453 out.go:177] * Automatically selected the socket_vmnet network
	I0729 17:14:52.244841   11453 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 17:14:52.244875   11453 cni.go:84] Creating CNI manager for ""
	I0729 17:14:52.244884   11453 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:14:52.244889   11453 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:14:52.244913   11453 start.go:340] cluster config:
	{Name:newest-cni-051000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:14:52.248629   11453 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:52.256812   11453 out.go:177] * Starting "newest-cni-051000" primary control-plane node in "newest-cni-051000" cluster
	I0729 17:14:52.260794   11453 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 17:14:52.260808   11453 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 17:14:52.260815   11453 cache.go:56] Caching tarball of preloaded images
	I0729 17:14:52.260862   11453 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:14:52.260867   11453 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 17:14:52.260928   11453 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/newest-cni-051000/config.json ...
	I0729 17:14:52.260938   11453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/newest-cni-051000/config.json: {Name:mk3fa63e1f43f52358e309978d194df8e6cd9623 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:14:52.261203   11453 start.go:360] acquireMachinesLock for newest-cni-051000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:54.375088   11453 start.go:364] duration metric: took 2.113859583s to acquireMachinesLock for "newest-cni-051000"
	I0729 17:14:54.375274   11453 start.go:93] Provisioning new machine with config: &{Name:newest-cni-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:14:54.375564   11453 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:14:54.384839   11453 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:14:54.432860   11453 start.go:159] libmachine.API.Create for "newest-cni-051000" (driver="qemu2")
	I0729 17:14:54.432908   11453 client.go:168] LocalClient.Create starting
	I0729 17:14:54.433022   11453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:14:54.433087   11453 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:54.433106   11453 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:54.433172   11453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:14:54.433221   11453 main.go:141] libmachine: Decoding PEM data...
	I0729 17:14:54.433232   11453 main.go:141] libmachine: Parsing certificate...
	I0729 17:14:54.433830   11453 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:14:54.595630   11453 main.go:141] libmachine: Creating SSH key...
	I0729 17:14:54.649762   11453 main.go:141] libmachine: Creating Disk image...
	I0729 17:14:54.649772   11453 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:14:54.649977   11453 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2
	I0729 17:14:54.659976   11453 main.go:141] libmachine: STDOUT: 
	I0729 17:14:54.660003   11453 main.go:141] libmachine: STDERR: 
	I0729 17:14:54.660081   11453 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2 +20000M
	I0729 17:14:54.668982   11453 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:14:54.669008   11453 main.go:141] libmachine: STDERR: 
	I0729 17:14:54.669026   11453 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2
	I0729 17:14:54.669032   11453 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:14:54.669046   11453 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:54.669079   11453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:40:61:25:91:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2
	I0729 17:14:54.671054   11453 main.go:141] libmachine: STDOUT: 
	I0729 17:14:54.671075   11453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:54.671093   11453 client.go:171] duration metric: took 238.179084ms to LocalClient.Create
	I0729 17:14:56.673277   11453 start.go:128] duration metric: took 2.29768475s to createHost
	I0729 17:14:56.673395   11453 start.go:83] releasing machines lock for "newest-cni-051000", held for 2.298237584s
	W0729 17:14:56.673464   11453 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:56.680704   11453 out.go:177] * Deleting "newest-cni-051000" in qemu2 ...
	W0729 17:14:56.709588   11453 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:56.709617   11453 start.go:729] Will try again in 5 seconds ...
	I0729 17:15:01.711913   11453 start.go:360] acquireMachinesLock for newest-cni-051000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:15:01.712445   11453 start.go:364] duration metric: took 316.625µs to acquireMachinesLock for "newest-cni-051000"
	I0729 17:15:01.712591   11453 start.go:93] Provisioning new machine with config: &{Name:newest-cni-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 17:15:01.712965   11453 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 17:15:01.721541   11453 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:15:01.771228   11453 start.go:159] libmachine.API.Create for "newest-cni-051000" (driver="qemu2")
	I0729 17:15:01.771283   11453 client.go:168] LocalClient.Create starting
	I0729 17:15:01.771429   11453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/ca.pem
	I0729 17:15:01.771491   11453 main.go:141] libmachine: Decoding PEM data...
	I0729 17:15:01.771509   11453 main.go:141] libmachine: Parsing certificate...
	I0729 17:15:01.771573   11453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19346-7076/.minikube/certs/cert.pem
	I0729 17:15:01.771616   11453 main.go:141] libmachine: Decoding PEM data...
	I0729 17:15:01.771630   11453 main.go:141] libmachine: Parsing certificate...
	I0729 17:15:01.772328   11453 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 17:15:01.933316   11453 main.go:141] libmachine: Creating SSH key...
	I0729 17:15:02.028540   11453 main.go:141] libmachine: Creating Disk image...
	I0729 17:15:02.028545   11453 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 17:15:02.028783   11453 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2.raw /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2
	I0729 17:15:02.038182   11453 main.go:141] libmachine: STDOUT: 
	I0729 17:15:02.038201   11453 main.go:141] libmachine: STDERR: 
	I0729 17:15:02.038259   11453 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2 +20000M
	I0729 17:15:02.046163   11453 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 17:15:02.046184   11453 main.go:141] libmachine: STDERR: 
	I0729 17:15:02.046197   11453 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2
	I0729 17:15:02.046203   11453 main.go:141] libmachine: Starting QEMU VM...
	I0729 17:15:02.046212   11453 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:15:02.046259   11453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:13:24:f4:59:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2
	I0729 17:15:02.047890   11453 main.go:141] libmachine: STDOUT: 
	I0729 17:15:02.047904   11453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:15:02.047920   11453 client.go:171] duration metric: took 276.630709ms to LocalClient.Create
	I0729 17:15:04.050195   11453 start.go:128] duration metric: took 2.337192625s to createHost
	I0729 17:15:04.050282   11453 start.go:83] releasing machines lock for "newest-cni-051000", held for 2.3378095s
	W0729 17:15:04.050633   11453 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-051000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-051000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:15:04.055356   11453 out.go:177] 
	W0729 17:15:04.071374   11453 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:15:04.071399   11453 out.go:239] * 
	* 
	W0729 17:15:04.074110   11453 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:15:04.087208   11453 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-051000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-051000 -n newest-cni-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-051000 -n newest-cni-051000: exit status 7 (69.473667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (12.11s)
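Triage note: every failure in this group bottoms out in the same error, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the qemu2 driver's `socket_vmnet_client` cannot reach the `socket_vmnet` daemon on the CI host. A minimal check (socket path taken from the log; the `brew services` restart command is an assumption about how the daemon is managed on this host) might look like:

```shell
# Check whether the socket_vmnet daemon's Unix socket exists on this host.
# Path comes from the failing command line in the log above.
SOCK=/var/run/socket_vmnet

if [ -S "$SOCK" ]; then
  STATUS=present
else
  STATUS=missing
fi
echo "socket_vmnet socket: $STATUS"

# If missing, the daemon likely needs to be (re)started, e.g. on a
# Homebrew-managed host (assumed setup, verify locally):
#   sudo brew services restart socket_vmnet
```

If the socket is present but connections are still refused, checking the daemon's ownership and permissions on `$SOCK` would be the next step.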

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-294000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-294000 create -f testdata/busybox.yaml: exit status 1 (31.120541ms)

** stderr ** 
	error: context "default-k8s-diff-port-294000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-294000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (33.33475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (33.455916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-294000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-294000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-294000 describe deploy/metrics-server -n kube-system: exit status 1 (27.627875ms)

** stderr ** 
	error: context "default-k8s-diff-port-294000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-294000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (28.963125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-294000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-294000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.105207041s)

-- stdout --
	* [default-k8s-diff-port-294000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-294000" primary control-plane node in "default-k8s-diff-port-294000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-294000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-294000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:14:58.047975   11504 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:14:58.048101   11504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:58.048103   11504 out.go:304] Setting ErrFile to fd 2...
	I0729 17:14:58.048106   11504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:14:58.048239   11504 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:14:58.049225   11504 out.go:298] Setting JSON to false
	I0729 17:14:58.065090   11504 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6265,"bootTime":1722292233,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:14:58.065156   11504 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:14:58.068822   11504 out.go:177] * [default-k8s-diff-port-294000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:14:58.075731   11504 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:14:58.075798   11504 notify.go:220] Checking for updates...
	I0729 17:14:58.082745   11504 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:14:58.085743   11504 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:14:58.088766   11504 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:14:58.091651   11504 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:14:58.094681   11504 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:14:58.097985   11504 config.go:182] Loaded profile config "default-k8s-diff-port-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:14:58.098241   11504 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:14:58.101620   11504 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 17:14:58.108746   11504 start.go:297] selected driver: qemu2
	I0729 17:14:58.108755   11504 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:14:58.108837   11504 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:14:58.111037   11504 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:14:58.111081   11504 cni.go:84] Creating CNI manager for ""
	I0729 17:14:58.111089   11504 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:14:58.111122   11504 start.go:340] cluster config:
	{Name:default-k8s-diff-port-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-294000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:14:58.114531   11504 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:14:58.123777   11504 out.go:177] * Starting "default-k8s-diff-port-294000" primary control-plane node in "default-k8s-diff-port-294000" cluster
	I0729 17:14:58.128702   11504 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 17:14:58.128720   11504 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 17:14:58.128737   11504 cache.go:56] Caching tarball of preloaded images
	I0729 17:14:58.128809   11504 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:14:58.128815   11504 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 17:14:58.128885   11504 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/default-k8s-diff-port-294000/config.json ...
	I0729 17:14:58.129386   11504 start.go:360] acquireMachinesLock for default-k8s-diff-port-294000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:14:58.129432   11504 start.go:364] duration metric: took 40.375µs to acquireMachinesLock for "default-k8s-diff-port-294000"
	I0729 17:14:58.129441   11504 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:14:58.129448   11504 fix.go:54] fixHost starting: 
	I0729 17:14:58.129572   11504 fix.go:112] recreateIfNeeded on default-k8s-diff-port-294000: state=Stopped err=<nil>
	W0729 17:14:58.129580   11504 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:14:58.133739   11504 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-294000" ...
	I0729 17:14:58.141685   11504 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:14:58.141720   11504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:2e:36:18:0a:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0729 17:14:58.143805   11504 main.go:141] libmachine: STDOUT: 
	I0729 17:14:58.143826   11504 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:14:58.143855   11504 fix.go:56] duration metric: took 14.40775ms for fixHost
	I0729 17:14:58.143860   11504 start.go:83] releasing machines lock for "default-k8s-diff-port-294000", held for 14.423125ms
	W0729 17:14:58.143867   11504 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:14:58.143903   11504 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:14:58.143908   11504 start.go:729] Will try again in 5 seconds ...
	I0729 17:15:03.146128   11504 start.go:360] acquireMachinesLock for default-k8s-diff-port-294000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:15:04.050461   11504 start.go:364] duration metric: took 904.213333ms to acquireMachinesLock for "default-k8s-diff-port-294000"
	I0729 17:15:04.050629   11504 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:15:04.050655   11504 fix.go:54] fixHost starting: 
	I0729 17:15:04.051417   11504 fix.go:112] recreateIfNeeded on default-k8s-diff-port-294000: state=Stopped err=<nil>
	W0729 17:15:04.051453   11504 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:15:04.067202   11504 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-294000" ...
	I0729 17:15:04.075282   11504 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:15:04.075499   11504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:2e:36:18:0a:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0729 17:15:04.084505   11504 main.go:141] libmachine: STDOUT: 
	I0729 17:15:04.084559   11504 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:15:04.084633   11504 fix.go:56] duration metric: took 33.982708ms for fixHost
	I0729 17:15:04.084655   11504 start.go:83] releasing machines lock for "default-k8s-diff-port-294000", held for 34.137334ms
	W0729 17:15:04.084825   11504 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-294000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-294000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:15:04.098260   11504 out.go:177] 
	W0729 17:15:04.102338   11504 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:15:04.102379   11504 out.go:239] * 
	* 
	W0729 17:15:04.105416   11504 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:15:04.115205   11504 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-294000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (58.772917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.17s)
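The failures in this group all trace back to the same error in the log above: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon on the build host was not accepting connections when the qemu2 driver tried to start the VM. A minimal shell sketch to check the daemon's state on the affected host (the socket and client paths are taken from the log; everything else is an illustrative assumption, not part of the test suite):

```shell
# Socket path as it appears in the failing qemu2 driver invocation above.
SOCK=/var/run/socket_vmnet

# 1. Does the Unix socket exist at all?
if [ -S "$SOCK" ]; then
  echo "socket present: $SOCK"
else
  echo "socket missing: $SOCK -- socket_vmnet daemon is likely not running"
fi

# 2. Is any socket_vmnet process alive? (pgrep -fl matches the full command line)
pgrep -fl socket_vmnet || echo "no socket_vmnet process found"
```

If the socket exists but connections are still refused, the daemon may have died while leaving a stale socket file behind; restarting the socket_vmnet service (and then re-running `minikube start`, or `minikube delete -p <profile>` first, as the log itself suggests) would be the next step.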

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-294000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (34.792542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-294000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-294000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-294000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.907041ms)

** stderr ** 
	error: context "default-k8s-diff-port-294000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-294000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (31.486042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-294000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (29.146625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-294000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-294000 --alsologtostderr -v=1: exit status 83 (40.470958ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-294000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-294000"

-- /stdout --
** stderr ** 
	I0729 17:15:04.377660   11535 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:15:04.377812   11535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:15:04.377815   11535 out.go:304] Setting ErrFile to fd 2...
	I0729 17:15:04.377818   11535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:15:04.377985   11535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:15:04.378209   11535 out.go:298] Setting JSON to false
	I0729 17:15:04.378216   11535 mustload.go:65] Loading cluster: default-k8s-diff-port-294000
	I0729 17:15:04.378420   11535 config.go:182] Loaded profile config "default-k8s-diff-port-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 17:15:04.381941   11535 out.go:177] * The control-plane node default-k8s-diff-port-294000 host is not running: state=Stopped
	I0729 17:15:04.385814   11535 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-294000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-294000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (29.044709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (29.14675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-051000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-051000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.187025542s)

-- stdout --
	* [newest-cni-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-051000" primary control-plane node in "newest-cni-051000" cluster
	* Restarting existing qemu2 VM for "newest-cni-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 17:15:07.506268   11570 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:15:07.506414   11570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:15:07.506420   11570 out.go:304] Setting ErrFile to fd 2...
	I0729 17:15:07.506423   11570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:15:07.506553   11570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:15:07.507545   11570 out.go:298] Setting JSON to false
	I0729 17:15:07.523933   11570 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6274,"bootTime":1722292233,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 17:15:07.524010   11570 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 17:15:07.529193   11570 out.go:177] * [newest-cni-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 17:15:07.536220   11570 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 17:15:07.536285   11570 notify.go:220] Checking for updates...
	I0729 17:15:07.544140   11570 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 17:15:07.545664   11570 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 17:15:07.549153   11570 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:15:07.552157   11570 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 17:15:07.555178   11570 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:15:07.558386   11570 config.go:182] Loaded profile config "newest-cni-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 17:15:07.558684   11570 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:15:07.563164   11570 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 17:15:07.570127   11570 start.go:297] selected driver: qemu2
	I0729 17:15:07.570133   11570 start.go:901] validating driver "qemu2" against &{Name:newest-cni-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expos
edPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:15:07.570187   11570 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:15:07.572670   11570 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 17:15:07.572694   11570 cni.go:84] Creating CNI manager for ""
	I0729 17:15:07.572701   11570 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 17:15:07.572731   11570 start.go:340] cluster config:
	{Name:newest-cni-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-051000 Namespace:default A
PIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:15:07.576376   11570 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:15:07.585080   11570 out.go:177] * Starting "newest-cni-051000" primary control-plane node in "newest-cni-051000" cluster
	I0729 17:15:07.589171   11570 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 17:15:07.589186   11570 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 17:15:07.589200   11570 cache.go:56] Caching tarball of preloaded images
	I0729 17:15:07.589264   11570 preload.go:172] Found /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 17:15:07.589271   11570 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 17:15:07.589328   11570 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/newest-cni-051000/config.json ...
	I0729 17:15:07.589792   11570 start.go:360] acquireMachinesLock for newest-cni-051000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:15:07.589824   11570 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "newest-cni-051000"
	I0729 17:15:07.589832   11570 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:15:07.589838   11570 fix.go:54] fixHost starting: 
	I0729 17:15:07.589946   11570 fix.go:112] recreateIfNeeded on newest-cni-051000: state=Stopped err=<nil>
	W0729 17:15:07.589953   11570 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:15:07.594112   11570 out.go:177] * Restarting existing qemu2 VM for "newest-cni-051000" ...
	I0729 17:15:07.599647   11570 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:15:07.599688   11570 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:13:24:f4:59:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2
	I0729 17:15:07.601723   11570 main.go:141] libmachine: STDOUT: 
	I0729 17:15:07.601743   11570 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:15:07.601768   11570 fix.go:56] duration metric: took 11.931ms for fixHost
	I0729 17:15:07.601772   11570 start.go:83] releasing machines lock for "newest-cni-051000", held for 11.943625ms
	W0729 17:15:07.601780   11570 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:15:07.601823   11570 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:15:07.601828   11570 start.go:729] Will try again in 5 seconds ...
	I0729 17:15:12.604152   11570 start.go:360] acquireMachinesLock for newest-cni-051000: {Name:mkdc5dd7226f19ea95e8545186b85714d05af01a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:15:12.604523   11570 start.go:364] duration metric: took 279µs to acquireMachinesLock for "newest-cni-051000"
	I0729 17:15:12.604663   11570 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:15:12.604685   11570 fix.go:54] fixHost starting: 
	I0729 17:15:12.605452   11570 fix.go:112] recreateIfNeeded on newest-cni-051000: state=Stopped err=<nil>
	W0729 17:15:12.605482   11570 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:15:12.614953   11570 out.go:177] * Restarting existing qemu2 VM for "newest-cni-051000" ...
	I0729 17:15:12.617874   11570 qemu.go:418] Using hvf for hardware acceleration
	I0729 17:15:12.618080   11570 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:13:24:f4:59:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19346-7076/.minikube/machines/newest-cni-051000/disk.qcow2
	I0729 17:15:12.627562   11570 main.go:141] libmachine: STDOUT: 
	I0729 17:15:12.627637   11570 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 17:15:12.627732   11570 fix.go:56] duration metric: took 23.048917ms for fixHost
	I0729 17:15:12.627751   11570 start.go:83] releasing machines lock for "newest-cni-051000", held for 23.205334ms
	W0729 17:15:12.627936   11570 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 17:15:12.636888   11570 out.go:177] 
	W0729 17:15:12.640920   11570 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 17:15:12.640945   11570 out.go:239] * 
	* 
	W0729 17:15:12.643407   11570 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:15:12.651846   11570 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-051000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-051000 -n newest-cni-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-051000 -n newest-cni-051000: exit status 7 (68.0025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-051000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-051000 -n newest-cni-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-051000 -n newest-cni-051000: exit status 7 (29.484708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-051000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-051000 --alsologtostderr -v=1: exit status 83 (42.146458ms)

-- stdout --
	* The control-plane node newest-cni-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-051000"

-- /stdout --
** stderr ** 
	I0729 17:15:12.836159   11588 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:15:12.836296   11588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:15:12.836302   11588 out.go:304] Setting ErrFile to fd 2...
	I0729 17:15:12.836305   11588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:15:12.836433   11588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 17:15:12.836654   11588 out.go:298] Setting JSON to false
	I0729 17:15:12.836660   11588 mustload.go:65] Loading cluster: newest-cni-051000
	I0729 17:15:12.836863   11588 config.go:182] Loaded profile config "newest-cni-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 17:15:12.841748   11588 out.go:177] * The control-plane node newest-cni-051000 host is not running: state=Stopped
	I0729 17:15:12.845768   11588 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-051000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-051000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-051000 -n newest-cni-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-051000 -n newest-cni-051000: exit status 7 (29.682125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-051000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-051000 -n newest-cni-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-051000 -n newest-cni-051000: exit status 7 (30.347833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 14.05
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 15.01
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.1
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.29
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 11.29
48 TestErrorSpam/start 0.4
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 9.65
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.74
64 TestFunctional/serial/CacheCmd/cache/add_local 1.06
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.23
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.1
102 TestFunctional/parallel/License 0.28
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
116 TestFunctional/parallel/ProfileCmd/profile_list 0.08
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
121 TestFunctional/parallel/Version/short 0.04
128 TestFunctional/parallel/ImageCommands/Setup 1.85
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.14
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.2
202 TestMainNoArgs 0.03
247 TestStoppedBinaryUpgrade/Setup 1.46
249 TestStoppedBinaryUpgrade/MinikubeLogs 0.92
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
265 TestNoKubernetes/serial/ProfileList 15.71
266 TestNoKubernetes/serial/Stop 3.2
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
284 TestStartStop/group/old-k8s-version/serial/Stop 3.42
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
295 TestStartStop/group/no-preload/serial/Stop 3.79
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
306 TestStartStop/group/embed-certs/serial/Stop 1.98
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.21
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
326 TestStartStop/group/newest-cni/serial/Stop 3.11
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-017000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-017000: exit status 85 (94.751041ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |          |
	|         | -p download-only-017000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:47:22
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:47:22.669599    7567 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:47:22.669732    7567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:47:22.669735    7567 out.go:304] Setting ErrFile to fd 2...
	I0729 16:47:22.669741    7567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:47:22.669867    7567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	W0729 16:47:22.669957    7567 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19346-7076/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19346-7076/.minikube/config/config.json: no such file or directory
	I0729 16:47:22.671282    7567 out.go:298] Setting JSON to true
	I0729 16:47:22.689727    7567 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4609,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:47:22.689814    7567 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:47:22.693917    7567 out.go:97] [download-only-017000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:47:22.694081    7567 notify.go:220] Checking for updates...
	W0729 16:47:22.694124    7567 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 16:47:22.698236    7567 out.go:169] MINIKUBE_LOCATION=19346
	I0729 16:47:22.699951    7567 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:47:22.704868    7567 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:47:22.708853    7567 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:47:22.715951    7567 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	W0729 16:47:22.722853    7567 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 16:47:22.723065    7567 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:47:22.726945    7567 out.go:97] Using the qemu2 driver based on user configuration
	I0729 16:47:22.726963    7567 start.go:297] selected driver: qemu2
	I0729 16:47:22.726976    7567 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:47:22.727032    7567 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:47:22.732040    7567 out.go:169] Automatically selected the socket_vmnet network
	I0729 16:47:22.737223    7567 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 16:47:22.737322    7567 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:47:22.737386    7567 cni.go:84] Creating CNI manager for ""
	I0729 16:47:22.737405    7567 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 16:47:22.737457    7567 start.go:340] cluster config:
	{Name:download-only-017000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:47:22.741352    7567 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:47:22.743355    7567 out.go:97] Downloading VM boot image ...
	I0729 16:47:22.743371    7567 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 16:47:27.725360    7567 out.go:97] Starting "download-only-017000" primary control-plane node in "download-only-017000" cluster
	I0729 16:47:27.725383    7567 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:47:27.786483    7567 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:47:27.786489    7567 cache.go:56] Caching tarball of preloaded images
	I0729 16:47:27.786635    7567 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:47:27.791172    7567 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 16:47:27.791179    7567 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:47:27.878373    7567 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:47:33.948214    7567 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:47:33.948372    7567 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:47:34.642812    7567 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 16:47:34.643013    7567 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/download-only-017000/config.json ...
	I0729 16:47:34.643033    7567 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/download-only-017000/config.json: {Name:mk2e750136eef84cd0c3e61bd45afe4021d8b7f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:47:34.643261    7567 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:47:34.644135    7567 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 16:47:35.008983    7567 out.go:169] 
	W0729 16:47:35.015155    7567 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19346-7076/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108665a60 0x108665a60 0x108665a60 0x108665a60 0x108665a60 0x108665a60 0x108665a60] Decompressors:map[bz2:0x1400000eed0 gz:0x1400000eed8 tar:0x1400000ee80 tar.bz2:0x1400000ee90 tar.gz:0x1400000eea0 tar.xz:0x1400000eeb0 tar.zst:0x1400000eec0 tbz2:0x1400000ee90 tgz:0x1400000eea0 txz:0x1400000eeb0 tzst:0x1400000eec0 xz:0x1400000eee0 zip:0x1400000eef0 zst:0x1400000eee8] Getters:map[file:0x14001388560 http:0x140000b4370 https:0x140000b43c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 16:47:35.015187    7567 out_reason.go:110] 
	W0729 16:47:35.022991    7567 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:47:35.026924    7567 out.go:169] 
	
	
	* The control-plane node download-only-017000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-017000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-017000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (14.05s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-107000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-107000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (14.053208375s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (14.05s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-107000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-107000: exit status 85 (79.485125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
	|         | -p download-only-017000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
	| delete  | -p download-only-017000        | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
	| start   | -o=json --download-only        | download-only-107000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
	|         | -p download-only-107000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:47:35
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:47:35.446109    7594 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:47:35.446238    7594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:47:35.446244    7594 out.go:304] Setting ErrFile to fd 2...
	I0729 16:47:35.446247    7594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:47:35.446381    7594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:47:35.447466    7594 out.go:298] Setting JSON to true
	I0729 16:47:35.463470    7594 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4622,"bootTime":1722292233,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:47:35.463543    7594 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:47:35.468503    7594 out.go:97] [download-only-107000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:47:35.468613    7594 notify.go:220] Checking for updates...
	I0729 16:47:35.472281    7594 out.go:169] MINIKUBE_LOCATION=19346
	I0729 16:47:35.475488    7594 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:47:35.479495    7594 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:47:35.481199    7594 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:47:35.484513    7594 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	W0729 16:47:35.490457    7594 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 16:47:35.490599    7594 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:47:35.493420    7594 out.go:97] Using the qemu2 driver based on user configuration
	I0729 16:47:35.493429    7594 start.go:297] selected driver: qemu2
	I0729 16:47:35.493432    7594 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:47:35.493485    7594 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:47:35.496482    7594 out.go:169] Automatically selected the socket_vmnet network
	I0729 16:47:35.501636    7594 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 16:47:35.501749    7594 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:47:35.501768    7594 cni.go:84] Creating CNI manager for ""
	I0729 16:47:35.501776    7594 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:47:35.501782    7594 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:47:35.501824    7594 start.go:340] cluster config:
	{Name:download-only-107000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-107000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:47:35.505419    7594 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:47:35.508442    7594 out.go:97] Starting "download-only-107000" primary control-plane node in "download-only-107000" cluster
	I0729 16:47:35.508451    7594 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:47:35.563988    7594 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:47:35.564004    7594 cache.go:56] Caching tarball of preloaded images
	I0729 16:47:35.564156    7594 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:47:35.569258    7594 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 16:47:35.569264    7594 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:47:35.656346    7594 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:47:41.617841    7594 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:47:41.618005    7594 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:47:42.160910    7594 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:47:42.161095    7594 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/download-only-107000/config.json ...
	I0729 16:47:42.161112    7594 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/download-only-107000/config.json: {Name:mk919bdddc81e2acb61bb5d7fa44d579b9ad82bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:47:42.161331    7594 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:47:42.161454    7594 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-107000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-107000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-107000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (15.01s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-330000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-330000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (15.008272s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (15.01s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-330000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-330000: exit status 85 (75.1665ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
	|         | -p download-only-017000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
	| delete  | -p download-only-017000             | download-only-017000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
	| start   | -o=json --download-only             | download-only-107000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
	|         | -p download-only-107000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
	| delete  | -p download-only-107000             | download-only-107000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT | 29 Jul 24 16:47 PDT |
	| start   | -o=json --download-only             | download-only-330000 | jenkins | v1.33.1 | 29 Jul 24 16:47 PDT |                     |
	|         | -p download-only-330000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:47:49
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:47:49.788709    7616 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:47:49.788825    7616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:47:49.788828    7616 out.go:304] Setting ErrFile to fd 2...
	I0729 16:47:49.788830    7616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:47:49.788959    7616 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:47:49.790048    7616 out.go:298] Setting JSON to true
	I0729 16:47:49.806101    7616 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4636,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:47:49.806167    7616 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:47:49.809709    7616 out.go:97] [download-only-330000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:47:49.809825    7616 notify.go:220] Checking for updates...
	I0729 16:47:49.812608    7616 out.go:169] MINIKUBE_LOCATION=19346
	I0729 16:47:49.816640    7616 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:47:49.819683    7616 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:47:49.822668    7616 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:47:49.825696    7616 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	W0729 16:47:49.831545    7616 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 16:47:49.831736    7616 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:47:49.834626    7616 out.go:97] Using the qemu2 driver based on user configuration
	I0729 16:47:49.834636    7616 start.go:297] selected driver: qemu2
	I0729 16:47:49.834638    7616 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:47:49.834692    7616 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:47:49.837603    7616 out.go:169] Automatically selected the socket_vmnet network
	I0729 16:47:49.841106    7616 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 16:47:49.841197    7616 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:47:49.841233    7616 cni.go:84] Creating CNI manager for ""
	I0729 16:47:49.841240    7616 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:47:49.841245    7616 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:47:49.841287    7616 start.go:340] cluster config:
	{Name:download-only-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-330000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet St
aticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:47:49.844541    7616 iso.go:125] acquiring lock: {Name:mk93ade8a72f7dade6be9f6632fea774c53d777b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:47:49.847700    7616 out.go:97] Starting "download-only-330000" primary control-plane node in "download-only-330000" cluster
	I0729 16:47:49.847708    7616 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:47:49.905784    7616 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 16:47:49.905815    7616 cache.go:56] Caching tarball of preloaded images
	I0729 16:47:49.905989    7616 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:47:49.911038    7616 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 16:47:49.911045    7616 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:47:49.987303    7616 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 16:47:54.850231    7616 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:47:54.850570    7616 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 16:47:55.369552    7616 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 16:47:55.369757    7616 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/download-only-330000/config.json ...
	I0729 16:47:55.369776    7616 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19346-7076/.minikube/profiles/download-only-330000/config.json: {Name:mkc720b965fe0493c429256c8d27a6de539b5e8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:47:55.370060    7616 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:47:55.370190    7616 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19346-7076/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-330000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-330000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-330000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.29s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-394000 --alsologtostderr --binary-mirror http://127.0.0.1:51052 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-394000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-394000
--- PASS: TestBinaryMirror (0.29s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-663000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-663000: exit status 85 (59.326541ms)

-- stdout --
	* Profile "addons-663000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-663000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-663000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-663000: exit status 85 (56.535875ms)

-- stdout --
	* Profile "addons-663000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-663000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (11.29s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19346
- KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2057342480/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

--- PASS: TestHyperKitDriverInstallOrUpdate (11.29s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 status: exit status 7 (30.769833ms)

-- stdout --
	nospam-030000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 status: exit status 7 (30.131541ms)

-- stdout --
	nospam-030000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 status: exit status 7 (28.96225ms)

-- stdout --
	nospam-030000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 pause: exit status 83 (40.120709ms)

-- stdout --
	* The control-plane node nospam-030000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-030000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 pause: exit status 83 (38.933542ms)

-- stdout --
	* The control-plane node nospam-030000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-030000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 pause: exit status 83 (37.828042ms)

-- stdout --
	* The control-plane node nospam-030000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-030000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 unpause: exit status 83 (39.738125ms)

-- stdout --
	* The control-plane node nospam-030000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-030000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 unpause: exit status 83 (37.081ms)

-- stdout --
	* The control-plane node nospam-030000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-030000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 unpause: exit status 83 (39.56275ms)

-- stdout --
	* The control-plane node nospam-030000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-030000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (9.65s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 stop: (2.01719975s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 stop: (3.587430667s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-030000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-030000 stop: (4.045937291s)
--- PASS: TestErrorSpam/stop (9.65s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19346-7076/.minikube/files/etc/test/nested/copy/7565/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.74s)

TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local693731115/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cache add minikube-local-cache-test:functional-905000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cache delete minikube-local-cache-test:functional-905000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-905000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 config get cpus: exit status 14 (29.482583ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 config get cpus: exit status 14 (36.199708ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-905000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-905000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (117.620459ms)

-- stdout --
	* [functional-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0729 16:49:37.503670    8081 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:49:37.503809    8081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:49:37.503813    8081 out.go:304] Setting ErrFile to fd 2...
	I0729 16:49:37.503815    8081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:49:37.503936    8081 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:49:37.504938    8081 out.go:298] Setting JSON to false
	I0729 16:49:37.520979    8081 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4744,"bootTime":1722292233,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:49:37.521053    8081 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:49:37.525776    8081 out.go:177] * [functional-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:49:37.532697    8081 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:49:37.532768    8081 notify.go:220] Checking for updates...
	I0729 16:49:37.539717    8081 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:49:37.543671    8081 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:49:37.546753    8081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:49:37.549773    8081 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:49:37.552710    8081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:49:37.555970    8081 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:49:37.556242    8081 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:49:37.560744    8081 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:49:37.567772    8081 start.go:297] selected driver: qemu2
	I0729 16:49:37.567779    8081 start.go:901] validating driver "qemu2" against &{Name:functional-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:49:37.567845    8081 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:49:37.574608    8081 out.go:177] 
	W0729 16:49:37.578788    8081 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 16:49:37.582742    8081 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-905000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-905000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-905000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.743584ms)

-- stdout --
	* [functional-905000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0729 16:49:37.382524    8077 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:49:37.382667    8077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:49:37.382670    8077 out.go:304] Setting ErrFile to fd 2...
	I0729 16:49:37.382673    8077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:49:37.382802    8077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19346-7076/.minikube/bin
	I0729 16:49:37.384224    8077 out.go:298] Setting JSON to false
	I0729 16:49:37.401202    8077 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4744,"bootTime":1722292233,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:49:37.401291    8077 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:49:37.405811    8077 out.go:177] * [functional-905000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0729 16:49:37.413757    8077 out.go:177]   - MINIKUBE_LOCATION=19346
	I0729 16:49:37.413792    8077 notify.go:220] Checking for updates...
	I0729 16:49:37.421730    8077 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	I0729 16:49:37.425714    8077 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:49:37.428718    8077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:49:37.431720    8077 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	I0729 16:49:37.434753    8077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:49:37.438004    8077 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:49:37.438276    8077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:49:37.442700    8077 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0729 16:49:37.449777    8077 start.go:297] selected driver: qemu2
	I0729 16:49:37.449783    8077 start.go:901] validating driver "qemu2" against &{Name:functional-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:49:37.449833    8077 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:49:37.456782    8077 out.go:177] 
	W0729 16:49:37.460676    8077 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 16:49:37.464746    8077 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "45.764792ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.961375ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "45.942208ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.759ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.85s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.814930166s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-905000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.85s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image rm docker.io/kicbase/echo-server:functional-905000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-905000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image save --daemon docker.io/kicbase/echo-server:functional-905000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-905000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013697125s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-905000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-905000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-905000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.14s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-832000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-832000 --output=json --user=testUser: (3.143236292s)
--- PASS: TestJSONOutput/stop/Command (3.14s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-743000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-743000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.824125ms)
-- stdout --
	{"specversion":"1.0","id":"68722e61-983f-407d-95e0-1fe57c2fa484","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-743000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4bd2ff6-e148-4804-929e-2f5639f342ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19346"}}
	{"specversion":"1.0","id":"08d686e2-e861-4109-a577-f01e5733ac1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig"}}
	{"specversion":"1.0","id":"9a454ac9-d140-4772-a8f6-ac71c1130651","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c9854b62-0614-41f5-a43d-1f41b8cd26cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"33336c41-3cab-4136-94b2-7baa6d7be5e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube"}}
	{"specversion":"1.0","id":"ae00f545-d680-4e53-9cad-8ccc6f385869","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ed36cd35-cf31-4e19-a7b6-94284ea0d6b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-743000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-743000
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.46s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.46s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-208000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-757000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-757000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.74925ms)
-- stdout --
	* [NoKubernetes-757000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19346
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19346-7076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19346-7076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-757000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-757000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (35.924166ms)
-- stdout --
	* The control-plane node NoKubernetes-757000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-757000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (15.71s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.661238792s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.71s)

TestNoKubernetes/serial/Stop (3.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-757000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-757000: (3.200328667s)
--- PASS: TestNoKubernetes/serial/Stop (3.20s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-757000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-757000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.431709ms)
-- stdout --
	* The control-plane node NoKubernetes-757000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-757000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStartStop/group/old-k8s-version/serial/Stop (3.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-813000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-813000 --alsologtostderr -v=3: (3.418249708s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-813000 -n old-k8s-version-813000: exit status 7 (59.174291ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-813000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/no-preload/serial/Stop (3.79s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-906000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-906000 --alsologtostderr -v=3: (3.789349792s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.79s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-906000 -n no-preload-906000: exit status 7 (60.569333ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-906000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (1.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-479000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-479000 --alsologtostderr -v=3: (1.978728042s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.98s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-479000 -n embed-certs-479000: exit status 7 (56.075375ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-479000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-294000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-294000 --alsologtostderr -v=3: (3.205831958s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.21s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (53.402125ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-294000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-051000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-051000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-051000 --alsologtostderr -v=3: (3.107690417s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-051000 -n newest-cni-051000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-051000 -n newest-cni-051000: exit status 7 (58.233709ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-051000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (10.47s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port393353720/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722296943328143000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port393353720/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722296943328143000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port393353720/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722296943328143000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port393353720/001/test-1722296943328143000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (57.526041ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.783417ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.176458ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.870875ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.44775ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.45375ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.750792ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "sudo umount -f /mount-9p": exit status 83 (46.80425ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-905000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port393353720/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (10.47s)

TestFunctional/parallel/MountCmd/specific-port (13.94s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port540893842/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.820125ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.403458ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.349791ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.642625ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.669542ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.845458ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.632666ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "sudo umount -f /mount-9p": exit status 83 (50.65975ms)

-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-905000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port540893842/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (13.94s)

TestFunctional/parallel/MountCmd/VerifyCleanup (9.46s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2645699080/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2645699080/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2645699080/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1: exit status 83 (78.079125ms)
                                                
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1: exit status 83 (84.644125ms)
                                                
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1: exit status 83 (87.143417ms)
                                                
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1: exit status 83 (85.871041ms)
                                                
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1: exit status 83 (85.778917ms)
                                                
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1: exit status 83 (84.925584ms)
                                                
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1: exit status 83 (88.280375ms)
                                                
-- stdout --
	* The control-plane node functional-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-905000"
                                                
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2645699080/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2645699080/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2645699080/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (9.46s)
+
TestGvisorAddon (0s)
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
+
TestKicCustomNetwork (0s)
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
+
TestKicExistingNetwork (0s)
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)
+
TestKicCustomSubnet (0s)
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)
+
TestKicStaticIP (0s)
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)
+
TestScheduledStopWindows (0s)
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
+
TestInsufficientStorage (0s)
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)
+
TestMissingContainerUpgrade (0s)
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
+
TestNetworkPlugins/group/cilium (2.29s)
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-561000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-561000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-561000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-561000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-561000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-561000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-561000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-561000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-561000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-561000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-561000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: /etc/resolv.conf:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-561000
>>> host: crictl pods:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: crictl containers:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> k8s: describe netcat deployment:
error: context "cilium-561000" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-561000" does not exist
>>> k8s: netcat logs:
error: context "cilium-561000" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-561000" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-561000" does not exist
>>> k8s: coredns logs:
error: context "cilium-561000" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-561000" does not exist
>>> k8s: api server logs:
error: context "cilium-561000" does not exist
>>> host: /etc/cni:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: ip a s:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: ip r s:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: iptables-save:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: iptables table nat:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-561000
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-561000
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-561000" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-561000" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-561000
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-561000
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-561000" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-561000" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-561000" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-561000" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-561000" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: kubelet daemon config:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> k8s: kubelet logs:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-561000
>>> host: docker daemon status:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: docker daemon config:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: docker system info:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: cri-docker daemon status:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: cri-docker daemon config:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: cri-dockerd version:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: containerd daemon status:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: containerd daemon config:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: containerd config dump:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: crio daemon status:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: crio daemon config:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: /etc/crio:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
>>> host: crio config:
* Profile "cilium-561000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561000"
----------------------- debugLogs end: cilium-561000 [took: 2.1824s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-561000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-561000
--- SKIP: TestNetworkPlugins/group/cilium (2.29s)
TestStartStop/group/disable-driver-mounts (0.11s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-464000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-464000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)