Test Report: QEMU_macOS 19338

0eb0b855c9cd12df3081fe3f67aa770440dcda12:2024-07-29:35550

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.57
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.05
36 TestAddons/Setup 10.35
37 TestCertOptions 10.11
38 TestCertExpiration 196.33
39 TestDockerFlags 12.22
40 TestForceSystemdFlag 10.6
41 TestForceSystemdEnv 9.99
47 TestErrorSpam/setup 9.81
56 TestFunctional/serial/StartWithProxy 10.09
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
70 TestFunctional/serial/MinikubeKubectlCmd 0.74
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.97
72 TestFunctional/serial/ExtraConfig 5.29
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.17
86 TestFunctional/parallel/ServiceCmdConnect 0.13
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.25
93 TestFunctional/parallel/FileSync 0.07
94 TestFunctional/parallel/CertSync 0.27
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
111 TestFunctional/parallel/DockerEnv/bash 0.04
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 82.43
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.3
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.11
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
126 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
128 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
129 TestFunctional/parallel/ServiceCmd/List 0.04
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
132 TestFunctional/parallel/ServiceCmd/Format 0.04
133 TestFunctional/parallel/ServiceCmd/URL 0.04
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 37.91
150 TestMultiControlPlane/serial/StartCluster 9.94
151 TestMultiControlPlane/serial/DeployApp 76.7
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
159 TestMultiControlPlane/serial/RestartSecondaryNode 44.13
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.42
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
164 TestMultiControlPlane/serial/StopCluster 2.97
165 TestMultiControlPlane/serial/RestartCluster 5.25
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
171 TestImageBuild/serial/Setup 10.01
174 TestJSONOutput/start/Command 9.77
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.13
206 TestMountStart/serial/StartWithMountFirst 9.91
209 TestMultiNode/serial/FreshStart2Nodes 10.07
210 TestMultiNode/serial/DeployApp2Nodes 81.67
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.07
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 58.43
218 TestMultiNode/serial/RestartKeepsNodes 8.81
219 TestMultiNode/serial/DeleteNode 0.1
220 TestMultiNode/serial/StopMultiNode 3.25
221 TestMultiNode/serial/RestartMultiNode 5.26
222 TestMultiNode/serial/ValidateNameConflict 20.57
226 TestPreload 9.97
228 TestScheduledStopUnix 10.11
229 TestSkaffold 12.93
232 TestRunningBinaryUpgrade 635.24
234 TestKubernetesUpgrade 17.53
248 TestStoppedBinaryUpgrade/Upgrade 591.08
258 TestPause/serial/Start 9.88
261 TestNoKubernetes/serial/StartWithK8s 9.98
262 TestNoKubernetes/serial/StartWithStopK8s 7.44
263 TestNoKubernetes/serial/Start 7.54
264 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.74
265 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.57
269 TestNoKubernetes/serial/StartNoArgs 5.35
271 TestNetworkPlugins/group/auto/Start 9.8
272 TestNetworkPlugins/group/kindnet/Start 9.84
273 TestNetworkPlugins/group/flannel/Start 9.86
274 TestNetworkPlugins/group/enable-default-cni/Start 9.79
275 TestNetworkPlugins/group/bridge/Start 9.89
276 TestNetworkPlugins/group/kubenet/Start 9.91
277 TestNetworkPlugins/group/custom-flannel/Start 9.82
278 TestNetworkPlugins/group/calico/Start 9.77
279 TestNetworkPlugins/group/false/Start 9.81
281 TestStartStop/group/old-k8s-version/serial/FirstStart 9.94
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.1
292 TestStartStop/group/no-preload/serial/FirstStart 10.04
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
297 TestStartStop/group/no-preload/serial/SecondStart 5.25
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 10.07
304 TestStartStop/group/embed-certs/serial/DeployApp 0.09
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
308 TestStartStop/group/embed-certs/serial/SecondStart 7.03
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.95
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
312 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
314 TestStartStop/group/embed-certs/serial/Pause 0.1
316 TestStartStop/group/newest-cni/serial/FirstStart 9.92
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
326 TestStartStop/group/newest-cni/serial/SecondStart 5.26
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
330 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (11.57s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-008000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-008000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.56705625s)

-- stdout --
	{"specversion":"1.0","id":"eea06e2a-0b5f-4d7f-a89e-0654fd0a9bd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-008000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b92388b-71f9-42f8-b5b9-60cbe2b766e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19338"}}
	{"specversion":"1.0","id":"572368ac-e1f7-4b84-84e8-90bd5beaea44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig"}}
	{"specversion":"1.0","id":"af99f854-8da2-41f0-8035-f68a4e96b832","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"65ecbfee-73c7-4bff-93cd-ee43f01bee58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b044fe18-c510-4a51-b5a1-aff3a4239a8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube"}}
	{"specversion":"1.0","id":"00a3fc4b-f3bb-46bf-8d4d-55eeb82e45ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"8928f0d4-064d-4bd8-b119-7858f9b1d21c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e336a70f-8e30-40bf-9c1b-62346ef98c87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"93a9f528-f66c-4df8-a666-77e69bdf5c95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6485b8bc-1509-4679-baae-e7995711db97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-008000\" primary control-plane node in \"download-only-008000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"15e241a9-0ae7-42e2-889f-ab880be49df7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"dc5b40d1-ad3e-43d3-bdb6-656230639432","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104ffda60 0x104ffda60 0x104ffda60 0x104ffda60 0x104ffda60 0x104ffda60 0x104ffda60] Decompressors:map[bz2:0x140007d1470 gz:0x140007d1478 tar:0x140007d1420 tar.bz2:0x140007d1430 tar.gz:0x140007d1440 tar.xz:0x140007d1450 tar.zst:0x140007d1460 tbz2:0x140007d1430 tgz:0x1
40007d1440 txz:0x140007d1450 tzst:0x140007d1460 xz:0x140007d1480 zip:0x140007d1490 zst:0x140007d1488] Getters:map[file:0x1400069a0c0 http:0x14000b24320 https:0x14000b24370] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"1951dc73-968c-42e9-a9c9-5bf120b61278","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0729 04:42:49.260854   21510 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:42:49.261004   21510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:49.261008   21510 out.go:304] Setting ErrFile to fd 2...
	I0729 04:42:49.261010   21510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:49.261147   21510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	W0729 04:42:49.261231   21510 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19338-21024/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19338-21024/.minikube/config/config.json: no such file or directory
	I0729 04:42:49.262550   21510 out.go:298] Setting JSON to true
	I0729 04:42:49.279423   21510 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9738,"bootTime":1722243631,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:42:49.279549   21510 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:42:49.284816   21510 out.go:97] [download-only-008000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:42:49.284954   21510 notify.go:220] Checking for updates...
	W0729 04:42:49.285047   21510 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 04:42:49.290390   21510 out.go:169] MINIKUBE_LOCATION=19338
	I0729 04:42:49.293860   21510 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:42:49.298155   21510 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:42:49.304061   21510 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:42:49.307778   21510 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	W0729 04:42:49.314330   21510 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 04:42:49.314521   21510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:42:49.318267   21510 out.go:97] Using the qemu2 driver based on user configuration
	I0729 04:42:49.318285   21510 start.go:297] selected driver: qemu2
	I0729 04:42:49.318308   21510 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:42:49.318366   21510 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:42:49.321994   21510 out.go:169] Automatically selected the socket_vmnet network
	I0729 04:42:49.327319   21510 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 04:42:49.327412   21510 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:42:49.327439   21510 cni.go:84] Creating CNI manager for ""
	I0729 04:42:49.327456   21510 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 04:42:49.327503   21510 start.go:340] cluster config:
	{Name:download-only-008000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-008000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:42:49.331552   21510 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:42:49.335837   21510 out.go:97] Downloading VM boot image ...
	I0729 04:42:49.335857   21510 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 04:42:54.126274   21510 out.go:97] Starting "download-only-008000" primary control-plane node in "download-only-008000" cluster
	I0729 04:42:54.126300   21510 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:42:54.181104   21510 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:42:54.181113   21510 cache.go:56] Caching tarball of preloaded images
	I0729 04:42:54.181258   21510 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:42:54.185913   21510 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 04:42:54.185919   21510 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:42:54.267924   21510 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:42:59.536462   21510 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:42:59.536631   21510 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:43:00.231838   21510 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 04:43:00.232047   21510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/download-only-008000/config.json ...
	I0729 04:43:00.232066   21510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/download-only-008000/config.json: {Name:mk8824f391d26486e3a1ec3bdb264ebdb1b0c69b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:43:00.233133   21510 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:43:00.233465   21510 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 04:43:00.750273   21510 out.go:169] 
	W0729 04:43:00.755253   21510 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104ffda60 0x104ffda60 0x104ffda60 0x104ffda60 0x104ffda60 0x104ffda60 0x104ffda60] Decompressors:map[bz2:0x140007d1470 gz:0x140007d1478 tar:0x140007d1420 tar.bz2:0x140007d1430 tar.gz:0x140007d1440 tar.xz:0x140007d1450 tar.zst:0x140007d1460 tbz2:0x140007d1430 tgz:0x140007d1440 txz:0x140007d1450 tzst:0x140007d1460 xz:0x140007d1480 zip:0x140007d1490 zst:0x140007d1488] Getters:map[file:0x1400069a0c0 http:0x14000b24320 https:0x14000b24370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 04:43:00.755280   21510 out_reason.go:110] 
	W0729 04:43:00.762307   21510 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:43:00.766185   21510 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-008000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.57s)
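
The terminal error above is upstream, not environmental: dl.k8s.io answers 404 for the v1.20.0 darwin/arm64 kubectl checksum file (v1.20.0 appears to predate published darwin/arm64 client binaries), so minikube aborts the cache step with exit status 40. A minimal reproduction of the 404, assuming outbound network access from the agent; both URLs are taken verbatim from the log:

	# Print the final HTTP status for the two URLs minikube requested;
	# a 404 on the .sha256 file matches "bad response code: 404" in the log.
	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl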

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
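
This failure is a direct consequence of the previous one: the stat target is exactly the Dst path of the failed kubectl download, so the cache slot was never populated. Mirroring the test's check by hand (path taken verbatim from the log):

	# Expected to fail with "No such file or directory" after the 404 above.
	stat /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/darwin/arm64/v1.20.0/kubectl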

TestOffline (10.05s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-754000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-754000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.885567167s)

-- stdout --
	* [offline-docker-754000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-754000" primary control-plane node in "offline-docker-754000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-754000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:53:30.407470   22929 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:53:30.407610   22929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:53:30.407614   22929 out.go:304] Setting ErrFile to fd 2...
	I0729 04:53:30.407616   22929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:53:30.407763   22929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:53:30.409088   22929 out.go:298] Setting JSON to false
	I0729 04:53:30.426845   22929 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10379,"bootTime":1722243631,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:53:30.426984   22929 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:53:30.432476   22929 out.go:177] * [offline-docker-754000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:53:30.439535   22929 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:53:30.439566   22929 notify.go:220] Checking for updates...
	I0729 04:53:30.445487   22929 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:53:30.448542   22929 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:53:30.451493   22929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:53:30.454507   22929 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:53:30.457406   22929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:53:30.460870   22929 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:53:30.460932   22929 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:53:30.465438   22929 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:53:30.472511   22929 start.go:297] selected driver: qemu2
	I0729 04:53:30.472523   22929 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:53:30.472529   22929 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:53:30.474516   22929 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:53:30.477464   22929 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:53:30.478708   22929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:53:30.478722   22929 cni.go:84] Creating CNI manager for ""
	I0729 04:53:30.478732   22929 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:53:30.478735   22929 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:53:30.478773   22929 start.go:340] cluster config:
	{Name:offline-docker-754000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-754000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:53:30.482258   22929 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:53:30.489491   22929 out.go:177] * Starting "offline-docker-754000" primary control-plane node in "offline-docker-754000" cluster
	I0729 04:53:30.497377   22929 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:53:30.497411   22929 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:53:30.497422   22929 cache.go:56] Caching tarball of preloaded images
	I0729 04:53:30.497497   22929 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:53:30.497503   22929 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:53:30.497586   22929 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/offline-docker-754000/config.json ...
	I0729 04:53:30.497596   22929 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/offline-docker-754000/config.json: {Name:mkbea96f2412640d862cc50c79cb8e8785c0d098 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:53:30.497999   22929 start.go:360] acquireMachinesLock for offline-docker-754000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:53:30.498034   22929 start.go:364] duration metric: took 26.709µs to acquireMachinesLock for "offline-docker-754000"
	I0729 04:53:30.498047   22929 start.go:93] Provisioning new machine with config: &{Name:offline-docker-754000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-754000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:53:30.498082   22929 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:53:30.506400   22929 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:53:30.522247   22929 start.go:159] libmachine.API.Create for "offline-docker-754000" (driver="qemu2")
	I0729 04:53:30.522280   22929 client.go:168] LocalClient.Create starting
	I0729 04:53:30.522382   22929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 04:53:30.522413   22929 main.go:141] libmachine: Decoding PEM data...
	I0729 04:53:30.522426   22929 main.go:141] libmachine: Parsing certificate...
	I0729 04:53:30.522472   22929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 04:53:30.522494   22929 main.go:141] libmachine: Decoding PEM data...
	I0729 04:53:30.522504   22929 main.go:141] libmachine: Parsing certificate...
	I0729 04:53:30.522982   22929 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:53:30.674260   22929 main.go:141] libmachine: Creating SSH key...
	I0729 04:53:30.852998   22929 main.go:141] libmachine: Creating Disk image...
	I0729 04:53:30.853006   22929 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:53:30.853170   22929 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/disk.qcow2
	I0729 04:53:30.863956   22929 main.go:141] libmachine: STDOUT: 
	I0729 04:53:30.863994   22929 main.go:141] libmachine: STDERR: 
	I0729 04:53:30.864070   22929 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/disk.qcow2 +20000M
	I0729 04:53:30.872425   22929 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:53:30.872449   22929 main.go:141] libmachine: STDERR: 
	I0729 04:53:30.872467   22929 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/disk.qcow2
	I0729 04:53:30.872472   22929 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:53:30.872486   22929 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:53:30.872517   22929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:8f:4a:4d:90:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/disk.qcow2
	I0729 04:53:30.874397   22929 main.go:141] libmachine: STDOUT: 
	I0729 04:53:30.874418   22929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:53:30.874439   22929 client.go:171] duration metric: took 352.15625ms to LocalClient.Create
	I0729 04:53:32.876478   22929 start.go:128] duration metric: took 2.378443458s to createHost
	I0729 04:53:32.876514   22929 start.go:83] releasing machines lock for "offline-docker-754000", held for 2.3785295s
	W0729 04:53:32.876539   22929 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:53:32.888879   22929 out.go:177] * Deleting "offline-docker-754000" in qemu2 ...
	W0729 04:53:32.901950   22929 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:53:32.901960   22929 start.go:729] Will try again in 5 seconds ...
	I0729 04:53:37.904101   22929 start.go:360] acquireMachinesLock for offline-docker-754000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:53:37.904589   22929 start.go:364] duration metric: took 354.041µs to acquireMachinesLock for "offline-docker-754000"
	I0729 04:53:37.904725   22929 start.go:93] Provisioning new machine with config: &{Name:offline-docker-754000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-754000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:53:37.905092   22929 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:53:37.923647   22929 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:53:37.973746   22929 start.go:159] libmachine.API.Create for "offline-docker-754000" (driver="qemu2")
	I0729 04:53:37.973792   22929 client.go:168] LocalClient.Create starting
	I0729 04:53:37.973910   22929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 04:53:37.973983   22929 main.go:141] libmachine: Decoding PEM data...
	I0729 04:53:37.974002   22929 main.go:141] libmachine: Parsing certificate...
	I0729 04:53:37.974067   22929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 04:53:37.974111   22929 main.go:141] libmachine: Decoding PEM data...
	I0729 04:53:37.974128   22929 main.go:141] libmachine: Parsing certificate...
	I0729 04:53:37.974625   22929 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:53:38.134756   22929 main.go:141] libmachine: Creating SSH key...
	I0729 04:53:38.198263   22929 main.go:141] libmachine: Creating Disk image...
	I0729 04:53:38.198269   22929 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:53:38.198453   22929 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/disk.qcow2
	I0729 04:53:38.207482   22929 main.go:141] libmachine: STDOUT: 
	I0729 04:53:38.207503   22929 main.go:141] libmachine: STDERR: 
	I0729 04:53:38.207550   22929 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/disk.qcow2 +20000M
	I0729 04:53:38.215395   22929 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:53:38.215411   22929 main.go:141] libmachine: STDERR: 
	I0729 04:53:38.215425   22929 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/disk.qcow2
	I0729 04:53:38.215428   22929 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:53:38.215440   22929 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:53:38.215469   22929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:d9:c6:fb:30:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/offline-docker-754000/disk.qcow2
	I0729 04:53:38.216988   22929 main.go:141] libmachine: STDOUT: 
	I0729 04:53:38.217010   22929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:53:38.217021   22929 client.go:171] duration metric: took 243.230833ms to LocalClient.Create
	I0729 04:53:40.219162   22929 start.go:128] duration metric: took 2.314086792s to createHost
	I0729 04:53:40.219235   22929 start.go:83] releasing machines lock for "offline-docker-754000", held for 2.314664792s
	W0729 04:53:40.219541   22929 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-754000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-754000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:53:40.230171   22929 out.go:177] 
	W0729 04:53:40.234325   22929 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:53:40.234362   22929 out.go:239] * 
	* 
	W0729 04:53:40.236862   22929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:53:40.248006   22929 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-754000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-29 04:53:40.265235 -0700 PDT m=+651.103480376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-754000 -n offline-docker-754000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-754000 -n offline-docker-754000: exit status 7 (63.335125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-754000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-754000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-754000
--- FAIL: TestOffline (10.05s)
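
From here on, nearly every start-type failure in this run appears to share the root cause visible in the stderr above: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, i.e. the socket_vmnet daemon is not running on the agent. A quick health check follows; the daemon start line is a sketch that assumes the /opt/socket_vmnet install layout shown in the logs, and the gateway address is an assumption, not taken from this report:

	# Does the helper socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If not, start it as root (vmnet.framework requires root); flags per the
	# socket_vmnet README -- hypothetical for this agent, verify locally.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet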

TestAddons/Setup (10.35s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-338000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-338000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.350777041s)

-- stdout --
	* [addons-338000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-338000" primary control-plane node in "addons-338000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-338000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:43:15.148396   21613 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:43:15.148508   21613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:43:15.148511   21613 out.go:304] Setting ErrFile to fd 2...
	I0729 04:43:15.148513   21613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:43:15.148676   21613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:43:15.149796   21613 out.go:298] Setting JSON to false
	I0729 04:43:15.165926   21613 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9764,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:43:15.165988   21613 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:43:15.171254   21613 out.go:177] * [addons-338000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:43:15.176241   21613 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:43:15.176298   21613 notify.go:220] Checking for updates...
	I0729 04:43:15.183143   21613 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:43:15.186176   21613 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:43:15.189228   21613 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:43:15.190683   21613 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:43:15.194149   21613 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:43:15.197337   21613 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:43:15.200985   21613 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:43:15.208172   21613 start.go:297] selected driver: qemu2
	I0729 04:43:15.208179   21613 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:43:15.208189   21613 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:43:15.210458   21613 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:43:15.214053   21613 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:43:15.217257   21613 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:43:15.217302   21613 cni.go:84] Creating CNI manager for ""
	I0729 04:43:15.217310   21613 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:43:15.217314   21613 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:43:15.217343   21613 start.go:340] cluster config:
	{Name:addons-338000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:43:15.221198   21613 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:43:15.229202   21613 out.go:177] * Starting "addons-338000" primary control-plane node in "addons-338000" cluster
	I0729 04:43:15.233185   21613 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:43:15.233200   21613 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:43:15.233210   21613 cache.go:56] Caching tarball of preloaded images
	I0729 04:43:15.233266   21613 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:43:15.233272   21613 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:43:15.233490   21613 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/addons-338000/config.json ...
	I0729 04:43:15.233501   21613 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/addons-338000/config.json: {Name:mkc43281d653cd981a7f30765b79d8b182b6dae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:43:15.233915   21613 start.go:360] acquireMachinesLock for addons-338000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:43:15.233984   21613 start.go:364] duration metric: took 63.834µs to acquireMachinesLock for "addons-338000"
	I0729 04:43:15.233995   21613 start.go:93] Provisioning new machine with config: &{Name:addons-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:43:15.234019   21613 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:43:15.238284   21613 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 04:43:15.257369   21613 start.go:159] libmachine.API.Create for "addons-338000" (driver="qemu2")
	I0729 04:43:15.257390   21613 client.go:168] LocalClient.Create starting
	I0729 04:43:15.257542   21613 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 04:43:15.376268   21613 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 04:43:15.501871   21613 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:43:15.725482   21613 main.go:141] libmachine: Creating SSH key...
	I0729 04:43:15.905761   21613 main.go:141] libmachine: Creating Disk image...
	I0729 04:43:15.905774   21613 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:43:15.905996   21613 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/disk.qcow2
	I0729 04:43:15.915426   21613 main.go:141] libmachine: STDOUT: 
	I0729 04:43:15.915451   21613 main.go:141] libmachine: STDERR: 
	I0729 04:43:15.915494   21613 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/disk.qcow2 +20000M
	I0729 04:43:15.923344   21613 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:43:15.923357   21613 main.go:141] libmachine: STDERR: 
	I0729 04:43:15.923371   21613 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/disk.qcow2
	I0729 04:43:15.923379   21613 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:43:15.923408   21613 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:43:15.923441   21613 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:cf:2a:44:51:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/disk.qcow2
	I0729 04:43:15.925068   21613 main.go:141] libmachine: STDOUT: 
	I0729 04:43:15.925086   21613 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:43:15.925104   21613 client.go:171] duration metric: took 667.726709ms to LocalClient.Create
	I0729 04:43:17.927369   21613 start.go:128] duration metric: took 2.693354083s to createHost
	I0729 04:43:17.927474   21613 start.go:83] releasing machines lock for "addons-338000", held for 2.693543292s
	W0729 04:43:17.927527   21613 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:43:17.937696   21613 out.go:177] * Deleting "addons-338000" in qemu2 ...
	W0729 04:43:17.968143   21613 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:43:17.968167   21613 start.go:729] Will try again in 5 seconds ...
	I0729 04:43:22.969472   21613 start.go:360] acquireMachinesLock for addons-338000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:43:22.970055   21613 start.go:364] duration metric: took 479.042µs to acquireMachinesLock for "addons-338000"
	I0729 04:43:22.970204   21613 start.go:93] Provisioning new machine with config: &{Name:addons-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:43:22.970518   21613 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:43:22.983962   21613 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 04:43:23.035692   21613 start.go:159] libmachine.API.Create for "addons-338000" (driver="qemu2")
	I0729 04:43:23.035730   21613 client.go:168] LocalClient.Create starting
	I0729 04:43:23.035863   21613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 04:43:23.035925   21613 main.go:141] libmachine: Decoding PEM data...
	I0729 04:43:23.035954   21613 main.go:141] libmachine: Parsing certificate...
	I0729 04:43:23.036056   21613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 04:43:23.036103   21613 main.go:141] libmachine: Decoding PEM data...
	I0729 04:43:23.036115   21613 main.go:141] libmachine: Parsing certificate...
	I0729 04:43:23.036682   21613 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:43:23.215779   21613 main.go:141] libmachine: Creating SSH key...
	I0729 04:43:23.412375   21613 main.go:141] libmachine: Creating Disk image...
	I0729 04:43:23.412382   21613 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:43:23.412577   21613 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/disk.qcow2
	I0729 04:43:23.422483   21613 main.go:141] libmachine: STDOUT: 
	I0729 04:43:23.422585   21613 main.go:141] libmachine: STDERR: 
	I0729 04:43:23.422643   21613 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/disk.qcow2 +20000M
	I0729 04:43:23.430706   21613 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:43:23.430722   21613 main.go:141] libmachine: STDERR: 
	I0729 04:43:23.430737   21613 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/disk.qcow2
	I0729 04:43:23.430746   21613 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:43:23.430757   21613 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:43:23.430787   21613 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:63:90:21:e7:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/addons-338000/disk.qcow2
	I0729 04:43:23.432460   21613 main.go:141] libmachine: STDOUT: 
	I0729 04:43:23.432475   21613 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:43:23.432492   21613 client.go:171] duration metric: took 396.766625ms to LocalClient.Create
	I0729 04:43:25.434700   21613 start.go:128] duration metric: took 2.464195542s to createHost
	I0729 04:43:25.434782   21613 start.go:83] releasing machines lock for "addons-338000", held for 2.464755666s
	W0729 04:43:25.435151   21613 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-338000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-338000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:43:25.443705   21613 out.go:177] 
	W0729 04:43:25.446723   21613 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:43:25.446753   21613 out.go:239] * 
	* 
	W0729 04:43:25.449262   21613 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:43:25.455612   21613 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-338000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.35s)
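The logged invocation above shows how the driver wires VM networking: socket_vmnet_client connects to /var/run/socket_vmnet and then launches qemu-system-aarch64 with -netdev socket,id=net0,fd=3, so the already-connected descriptor is inherited by qemu as fd 3. A hedged Go sketch of that descriptor-passing pattern (illustrative only, not socket_vmnet_client's actual implementation):

	// fdpass.go - sketch: pass a connected unix-socket descriptor to a
	// child process as fd 3 (fds 0-2 are stdin/stdout/stderr).
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatal(err) // the step that fails throughout this report
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child, matching "fd=3" above;
		// a real launcher would pass the full qemu argument list.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}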

TestCertOptions (10.11s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-467000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-467000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.847328s)

-- stdout --
	* [cert-options-467000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-467000" primary control-plane node in "cert-options-467000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-467000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-467000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-467000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-467000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-467000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.101333ms)

-- stdout --
	* The control-plane node cert-options-467000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-467000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-467000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-467000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-467000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-467000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.64175ms)

-- stdout --
	* The control-plane node cert-options-467000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-467000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-467000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-467000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-467000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-29 05:05:20.713715 -0700 PDT m=+1351.544489501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-467000 -n cert-options-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-467000 -n cert-options-467000: exit status 7 (29.273167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-467000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-467000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-467000
--- FAIL: TestCertOptions (10.11s)
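Because the ssh step exited 83, the SAN assertions at cert_options_test.go:69 ran against the "host is not running" message instead of a certificate. For reference, the check the test performs via openssl can be expressed with crypto/x509; a sketch assuming a locally copied apiserver.crt (hypothetical path, not the test's code):

	// sancheck.go - sketch: list the SANs of a PEM certificate so the
	// expected entries (127.0.0.1, 192.168.15.15, localhost,
	// www.google.com) can be verified.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs:", cert.IPAddresses)
	}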

TestCertExpiration (196.33s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-512000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-512000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.936296834s)

-- stdout --
	* [cert-expiration-512000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-512000" primary control-plane node in "cert-expiration-512000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-512000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-512000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-512000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-512000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-512000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.233489292s)

-- stdout --
	* [cert-expiration-512000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-512000" primary control-plane node in "cert-expiration-512000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-512000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-512000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-512000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-512000" primary control-plane node in "cert-expiration-512000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-512000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-29 05:08:05.956774 -0700 PDT m=+1516.790596876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-512000 -n cert-expiration-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-512000 -n cert-expiration-512000: exit status 7 (68.261459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-512000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-512000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-512000
--- FAIL: TestCertExpiration (196.33s)
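TestCertExpiration starts the cluster with --cert-expiration=3m, waits out the certificate lifetime (the wait accounts for most of the 196s wall time, since the two start attempts together took under 20s), then restarts with --cert-expiration=8760h expecting a warning about expired certs; here neither start ever got past VM creation. The expiry condition itself is a comparison against the certificate's NotAfter; a minimal sketch (hypothetical file path, not the test's code):

	// expiry.go - sketch: report whether a PEM certificate has expired.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().After(cert.NotAfter) {
			fmt.Println("certificate expired at", cert.NotAfter)
		} else {
			fmt.Println("certificate valid until", cert.NotAfter)
		}
	}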

TestDockerFlags (12.22s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-147000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-147000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.976448125s)

-- stdout --
	* [docker-flags-147000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-147000" primary control-plane node in "docker-flags-147000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-147000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:04:58.520351   23741 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:04:58.520497   23741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:04:58.520503   23741 out.go:304] Setting ErrFile to fd 2...
	I0729 05:04:58.520505   23741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:04:58.520647   23741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:04:58.521948   23741 out.go:298] Setting JSON to false
	I0729 05:04:58.539096   23741 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11067,"bootTime":1722243631,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:04:58.539158   23741 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:04:58.545949   23741 out.go:177] * [docker-flags-147000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:04:58.556926   23741 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:04:58.556970   23741 notify.go:220] Checking for updates...
	I0729 05:04:58.566891   23741 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:04:58.570926   23741 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:04:58.573842   23741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:04:58.576932   23741 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:04:58.579891   23741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:04:58.583204   23741 config.go:182] Loaded profile config "cert-expiration-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:04:58.583272   23741 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:04:58.583314   23741 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:04:58.586872   23741 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:04:58.593855   23741 start.go:297] selected driver: qemu2
	I0729 05:04:58.593860   23741 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:04:58.593865   23741 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:04:58.595959   23741 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:04:58.598873   23741 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:04:58.602937   23741 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0729 05:04:58.602954   23741 cni.go:84] Creating CNI manager for ""
	I0729 05:04:58.602960   23741 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:04:58.602963   23741 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 05:04:58.602987   23741 start.go:340] cluster config:
	{Name:docker-flags-147000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-147000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:04:58.606378   23741 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:04:58.614902   23741 out.go:177] * Starting "docker-flags-147000" primary control-plane node in "docker-flags-147000" cluster
	I0729 05:04:58.617858   23741 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:04:58.617870   23741 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:04:58.617880   23741 cache.go:56] Caching tarball of preloaded images
	I0729 05:04:58.617927   23741 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:04:58.617932   23741 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:04:58.617991   23741 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/docker-flags-147000/config.json ...
	I0729 05:04:58.618000   23741 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/docker-flags-147000/config.json: {Name:mk81915b878eb3532b6a3ea7458588e330ffcc6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:04:58.618279   23741 start.go:360] acquireMachinesLock for docker-flags-147000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:05:00.683518   23741 start.go:364] duration metric: took 2.065240458s to acquireMachinesLock for "docker-flags-147000"
	I0729 05:05:00.683712   23741 start.go:93] Provisioning new machine with config: &{Name:docker-flags-147000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-147000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:05:00.683953   23741 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:05:00.693222   23741 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 05:05:00.742938   23741 start.go:159] libmachine.API.Create for "docker-flags-147000" (driver="qemu2")
	I0729 05:05:00.742989   23741 client.go:168] LocalClient.Create starting
	I0729 05:05:00.743146   23741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:05:00.743200   23741 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:00.743227   23741 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:00.743294   23741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:05:00.743338   23741 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:00.743350   23741 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:00.743992   23741 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:05:00.915582   23741 main.go:141] libmachine: Creating SSH key...
	I0729 05:05:01.040307   23741 main.go:141] libmachine: Creating Disk image...
	I0729 05:05:01.040312   23741 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:05:01.040529   23741 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/disk.qcow2
	I0729 05:05:01.049845   23741 main.go:141] libmachine: STDOUT: 
	I0729 05:05:01.049865   23741 main.go:141] libmachine: STDERR: 
	I0729 05:05:01.049923   23741 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/disk.qcow2 +20000M
	I0729 05:05:01.057784   23741 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:05:01.057796   23741 main.go:141] libmachine: STDERR: 
	I0729 05:05:01.057807   23741 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/disk.qcow2
	I0729 05:05:01.057812   23741 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:05:01.057825   23741 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:05:01.057857   23741 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:85:22:d3:55:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/disk.qcow2
	I0729 05:05:01.059481   23741 main.go:141] libmachine: STDOUT: 
	I0729 05:05:01.059493   23741 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:05:01.059512   23741 client.go:171] duration metric: took 316.522208ms to LocalClient.Create
	I0729 05:05:03.061715   23741 start.go:128] duration metric: took 2.377776s to createHost
	I0729 05:05:03.061765   23741 start.go:83] releasing machines lock for "docker-flags-147000", held for 2.378244709s
	W0729 05:05:03.061817   23741 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:03.078077   23741 out.go:177] * Deleting "docker-flags-147000" in qemu2 ...
	W0729 05:05:03.106700   23741 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:03.106741   23741 start.go:729] Will try again in 5 seconds ...
	I0729 05:05:08.108902   23741 start.go:360] acquireMachinesLock for docker-flags-147000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:05:08.109298   23741 start.go:364] duration metric: took 323.167µs to acquireMachinesLock for "docker-flags-147000"
	I0729 05:05:08.109410   23741 start.go:93] Provisioning new machine with config: &{Name:docker-flags-147000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-147000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:05:08.109655   23741 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:05:08.120360   23741 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 05:05:08.169453   23741 start.go:159] libmachine.API.Create for "docker-flags-147000" (driver="qemu2")
	I0729 05:05:08.169502   23741 client.go:168] LocalClient.Create starting
	I0729 05:05:08.169612   23741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:05:08.169685   23741 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:08.169706   23741 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:08.169765   23741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:05:08.169809   23741 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:08.169820   23741 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:08.171986   23741 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:05:08.332990   23741 main.go:141] libmachine: Creating SSH key...
	I0729 05:05:08.403948   23741 main.go:141] libmachine: Creating Disk image...
	I0729 05:05:08.403953   23741 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:05:08.404170   23741 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/disk.qcow2
	I0729 05:05:08.413306   23741 main.go:141] libmachine: STDOUT: 
	I0729 05:05:08.413323   23741 main.go:141] libmachine: STDERR: 
	I0729 05:05:08.413384   23741 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/disk.qcow2 +20000M
	I0729 05:05:08.421320   23741 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:05:08.421335   23741 main.go:141] libmachine: STDERR: 
	I0729 05:05:08.421344   23741 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/disk.qcow2
	I0729 05:05:08.421347   23741 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:05:08.421357   23741 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:05:08.421382   23741 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:39:1d:b8:6b:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/docker-flags-147000/disk.qcow2
	I0729 05:05:08.423050   23741 main.go:141] libmachine: STDOUT: 
	I0729 05:05:08.423064   23741 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:05:08.423076   23741 client.go:171] duration metric: took 253.571208ms to LocalClient.Create
	I0729 05:05:10.425307   23741 start.go:128] duration metric: took 2.315637417s to createHost
	I0729 05:05:10.425395   23741 start.go:83] releasing machines lock for "docker-flags-147000", held for 2.316112333s
	W0729 05:05:10.425820   23741 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-147000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-147000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:10.435347   23741 out.go:177] 
	W0729 05:05:10.441453   23741 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:05:10.441490   23741 out.go:239] * 
	* 
	W0729 05:05:10.444101   23741 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:05:10.453337   23741 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-147000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
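Every VM creation in this run dies at the same step: socket_vmnet_client cannot reach the host-side daemon ("Failed to connect to /var/run/socket_vmnet: Connection refused"), so QEMU is never launched. A minimal host-side check, using only the paths that appear in the log above (the trailing "true" is an illustrative no-op, since socket_vmnet_client execs whatever command follows the socket path with fd 3 connected to the daemon):

    # Does the unix socket exist, and is anything serving it?
    ls -l /var/run/socket_vmnet
    # Reproduces the exact failure mode outside the test harness.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
      || echo "socket_vmnet daemon is not accepting connections"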
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-147000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-147000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (82.954583ms)

-- stdout --
	* The control-plane node docker-flags-147000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-147000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-147000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-147000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-147000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-147000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-147000\"\n"*.
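For reference, had the VM booted, the two assertions above reduce to inspecting the docker unit's environment; the exchange below is a sketch of the expected success case, not output from this run:

    $ out/minikube-darwin-arm64 -p docker-flags-147000 ssh \
        "sudo systemctl show docker --property=Environment --no-pager"
    Environment=FOO=BAR BAZ=BAT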
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-147000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-147000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (49.564209ms)

-- stdout --
	* The control-plane node docker-flags-147000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-147000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-147000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-147000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug* . output: "* The control-plane node docker-flags-147000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-147000\"\n"
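Likewise, the --docker-opt=debug and --docker-opt=icc=true flags should surface in the unit's ExecStart property; the output below is an illustrative sketch of what the assertion looks for, not a capture from this run:

    $ out/minikube-darwin-arm64 -p docker-flags-147000 ssh \
        "sudo systemctl show docker --property=ExecStart --no-pager"
    ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true ... }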
panic.go:626: *** TestDockerFlags FAILED at 2024-07-29 05:05:10.601407 -0700 PDT m=+1341.431994459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-147000 -n docker-flags-147000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-147000 -n docker-flags-147000: exit status 7 (31.353083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-147000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-147000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-147000
--- FAIL: TestDockerFlags (12.22s)
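The failure is host-side rather than test-specific, so the remediation is to bring the socket_vmnet daemon back up before rerunning. Assuming the /opt/socket_vmnet prefix shown in the log (the gateway address below is an example, not taken from this run), a manual restart looks roughly like:

    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 \
      /var/run/socket_vmnet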

                                                
                                    
TestForceSystemdFlag (10.6s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-232000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-232000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.403952292s)

-- stdout --
	* [force-systemd-flag-232000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-232000" primary control-plane node in "force-systemd-flag-232000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-232000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0729 05:04:25.493852   23599 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:04:25.494006   23599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:04:25.494009   23599 out.go:304] Setting ErrFile to fd 2...
	I0729 05:04:25.494012   23599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:04:25.494152   23599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:04:25.495268   23599 out.go:298] Setting JSON to false
	I0729 05:04:25.511533   23599 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11034,"bootTime":1722243631,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:04:25.511602   23599 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:04:25.516078   23599 out.go:177] * [force-systemd-flag-232000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:04:25.524197   23599 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:04:25.524255   23599 notify.go:220] Checking for updates...
	I0729 05:04:25.531167   23599 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:04:25.534130   23599 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:04:25.538123   23599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:04:25.541221   23599 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:04:25.544183   23599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:04:25.547453   23599 config.go:182] Loaded profile config "NoKubernetes-911000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:04:25.547526   23599 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:04:25.547580   23599 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:04:25.551141   23599 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:04:25.558086   23599 start.go:297] selected driver: qemu2
	I0729 05:04:25.558093   23599 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:04:25.558100   23599 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:04:25.560472   23599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:04:25.564169   23599 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:04:25.567288   23599 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 05:04:25.567326   23599 cni.go:84] Creating CNI manager for ""
	I0729 05:04:25.567343   23599 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:04:25.567349   23599 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 05:04:25.567382   23599 start.go:340] cluster config:
	{Name:force-systemd-flag-232000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:04:25.571400   23599 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:04:25.580185   23599 out.go:177] * Starting "force-systemd-flag-232000" primary control-plane node in "force-systemd-flag-232000" cluster
	I0729 05:04:25.584121   23599 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:04:25.584136   23599 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:04:25.584146   23599 cache.go:56] Caching tarball of preloaded images
	I0729 05:04:25.584201   23599 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:04:25.584207   23599 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:04:25.584266   23599 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/force-systemd-flag-232000/config.json ...
	I0729 05:04:25.584277   23599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/force-systemd-flag-232000/config.json: {Name:mk10743ef83969323fe94b15dbdef15215e8e4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:04:25.584514   23599 start.go:360] acquireMachinesLock for force-systemd-flag-232000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:04:25.852141   23599 start.go:364] duration metric: took 267.584292ms to acquireMachinesLock for "force-systemd-flag-232000"
	I0729 05:04:25.852278   23599 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:04:25.852469   23599 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:04:25.863679   23599 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 05:04:25.913111   23599 start.go:159] libmachine.API.Create for "force-systemd-flag-232000" (driver="qemu2")
	I0729 05:04:25.913165   23599 client.go:168] LocalClient.Create starting
	I0729 05:04:25.913308   23599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:04:25.913369   23599 main.go:141] libmachine: Decoding PEM data...
	I0729 05:04:25.913395   23599 main.go:141] libmachine: Parsing certificate...
	I0729 05:04:25.913467   23599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:04:25.913513   23599 main.go:141] libmachine: Decoding PEM data...
	I0729 05:04:25.913532   23599 main.go:141] libmachine: Parsing certificate...
	I0729 05:04:25.914164   23599 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:04:26.161369   23599 main.go:141] libmachine: Creating SSH key...
	I0729 05:04:26.224198   23599 main.go:141] libmachine: Creating Disk image...
	I0729 05:04:26.224203   23599 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:04:26.224421   23599 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/disk.qcow2
	I0729 05:04:26.233677   23599 main.go:141] libmachine: STDOUT: 
	I0729 05:04:26.233694   23599 main.go:141] libmachine: STDERR: 
	I0729 05:04:26.233742   23599 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/disk.qcow2 +20000M
	I0729 05:04:26.241530   23599 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:04:26.241544   23599 main.go:141] libmachine: STDERR: 
	I0729 05:04:26.241576   23599 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/disk.qcow2
	I0729 05:04:26.241581   23599 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:04:26.241602   23599 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:04:26.241630   23599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:b4:ca:76:8f:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/disk.qcow2
	I0729 05:04:26.243236   23599 main.go:141] libmachine: STDOUT: 
	I0729 05:04:26.243250   23599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:04:26.243269   23599 client.go:171] duration metric: took 330.094166ms to LocalClient.Create
	I0729 05:04:28.245393   23599 start.go:128] duration metric: took 2.392943958s to createHost
	I0729 05:04:28.245458   23599 start.go:83] releasing machines lock for "force-systemd-flag-232000", held for 2.393320833s
	W0729 05:04:28.245567   23599 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:04:28.259075   23599 out.go:177] * Deleting "force-systemd-flag-232000" in qemu2 ...
	W0729 05:04:28.287260   23599 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:04:28.287283   23599 start.go:729] Will try again in 5 seconds ...
	I0729 05:04:33.289366   23599 start.go:360] acquireMachinesLock for force-systemd-flag-232000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:04:33.299972   23599 start.go:364] duration metric: took 10.483875ms to acquireMachinesLock for "force-systemd-flag-232000"
	I0729 05:04:33.300059   23599 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:04:33.300221   23599 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:04:33.312797   23599 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 05:04:33.358823   23599 start.go:159] libmachine.API.Create for "force-systemd-flag-232000" (driver="qemu2")
	I0729 05:04:33.358885   23599 client.go:168] LocalClient.Create starting
	I0729 05:04:33.359012   23599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:04:33.359077   23599 main.go:141] libmachine: Decoding PEM data...
	I0729 05:04:33.359091   23599 main.go:141] libmachine: Parsing certificate...
	I0729 05:04:33.359171   23599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:04:33.359215   23599 main.go:141] libmachine: Decoding PEM data...
	I0729 05:04:33.359227   23599 main.go:141] libmachine: Parsing certificate...
	I0729 05:04:33.359704   23599 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:04:33.618237   23599 main.go:141] libmachine: Creating SSH key...
	I0729 05:04:33.794829   23599 main.go:141] libmachine: Creating Disk image...
	I0729 05:04:33.794835   23599 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:04:33.795068   23599 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/disk.qcow2
	I0729 05:04:33.804626   23599 main.go:141] libmachine: STDOUT: 
	I0729 05:04:33.804644   23599 main.go:141] libmachine: STDERR: 
	I0729 05:04:33.804699   23599 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/disk.qcow2 +20000M
	I0729 05:04:33.812671   23599 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:04:33.812684   23599 main.go:141] libmachine: STDERR: 
	I0729 05:04:33.812695   23599 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/disk.qcow2
	I0729 05:04:33.812698   23599 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:04:33.812709   23599 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:04:33.812733   23599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:f6:12:ef:98:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-flag-232000/disk.qcow2
	I0729 05:04:33.814345   23599 main.go:141] libmachine: STDOUT: 
	I0729 05:04:33.814360   23599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:04:33.814381   23599 client.go:171] duration metric: took 455.499459ms to LocalClient.Create
	I0729 05:04:35.816668   23599 start.go:128] duration metric: took 2.516415667s to createHost
	I0729 05:04:35.816768   23599 start.go:83] releasing machines lock for "force-systemd-flag-232000", held for 2.516788458s
	W0729 05:04:35.817091   23599 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-232000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-232000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:04:35.831679   23599 out.go:177] 
	W0729 05:04:35.839821   23599 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:04:35.839864   23599 out.go:239] * 
	* 
	W0729 05:04:35.842552   23599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:04:35.850596   23599 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-232000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-232000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-232000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (82.166125ms)

-- stdout --
	* The control-plane node force-systemd-flag-232000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-232000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-232000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
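On a cluster that actually boots with --force-systemd, this check passes only when dockerd reports the systemd cgroup driver; the expected exchange (illustrative, not from this run) is:

    $ out/minikube-darwin-arm64 -p force-systemd-flag-232000 ssh \
        "docker info --format {{.CgroupDriver}}"
    systemd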
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-29 05:04:35.953593 -0700 PDT m=+1306.783541792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-232000 -n force-systemd-flag-232000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-232000 -n force-systemd-flag-232000: exit status 7 (34.908084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-232000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-232000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-232000
--- FAIL: TestForceSystemdFlag (10.60s)

                                                
                                    
TestForceSystemdEnv (9.99s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-756000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-756000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.78398s)

-- stdout --
	* [force-systemd-env-756000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-756000" primary control-plane node in "force-systemd-env-756000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-756000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0729 05:04:48.528436   23700 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:04:48.528619   23700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:04:48.528622   23700 out.go:304] Setting ErrFile to fd 2...
	I0729 05:04:48.528624   23700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:04:48.528741   23700 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:04:48.529839   23700 out.go:298] Setting JSON to false
	I0729 05:04:48.546002   23700 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11057,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:04:48.546103   23700 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:04:48.551841   23700 out.go:177] * [force-systemd-env-756000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:04:48.557857   23700 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:04:48.557921   23700 notify.go:220] Checking for updates...
	I0729 05:04:48.566806   23700 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:04:48.569821   23700 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:04:48.573843   23700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:04:48.576856   23700 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:04:48.579839   23700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0729 05:04:48.583227   23700 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:04:48.583298   23700 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:04:48.587794   23700 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:04:48.594832   23700 start.go:297] selected driver: qemu2
	I0729 05:04:48.594840   23700 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:04:48.594846   23700 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:04:48.597284   23700 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:04:48.599829   23700 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:04:48.603921   23700 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 05:04:48.603934   23700 cni.go:84] Creating CNI manager for ""
	I0729 05:04:48.603946   23700 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:04:48.603950   23700 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 05:04:48.603991   23700 start.go:340] cluster config:
	{Name:force-systemd-env-756000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:04:48.607734   23700 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:04:48.615698   23700 out.go:177] * Starting "force-systemd-env-756000" primary control-plane node in "force-systemd-env-756000" cluster
	I0729 05:04:48.619921   23700 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:04:48.619947   23700 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:04:48.619962   23700 cache.go:56] Caching tarball of preloaded images
	I0729 05:04:48.620036   23700 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:04:48.620042   23700 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:04:48.620108   23700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/force-systemd-env-756000/config.json ...
	I0729 05:04:48.620119   23700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/force-systemd-env-756000/config.json: {Name:mka9b537857571343dd04445bcd0ff05feccb845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:04:48.620499   23700 start.go:360] acquireMachinesLock for force-systemd-env-756000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:04:48.620536   23700 start.go:364] duration metric: took 30.417µs to acquireMachinesLock for "force-systemd-env-756000"
	I0729 05:04:48.620550   23700 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:04:48.620580   23700 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:04:48.624730   23700 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 05:04:48.642584   23700 start.go:159] libmachine.API.Create for "force-systemd-env-756000" (driver="qemu2")
	I0729 05:04:48.642611   23700 client.go:168] LocalClient.Create starting
	I0729 05:04:48.642674   23700 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:04:48.642702   23700 main.go:141] libmachine: Decoding PEM data...
	I0729 05:04:48.642709   23700 main.go:141] libmachine: Parsing certificate...
	I0729 05:04:48.642745   23700 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:04:48.642766   23700 main.go:141] libmachine: Decoding PEM data...
	I0729 05:04:48.642774   23700 main.go:141] libmachine: Parsing certificate...
	I0729 05:04:48.643200   23700 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:04:48.792654   23700 main.go:141] libmachine: Creating SSH key...
	I0729 05:04:48.821085   23700 main.go:141] libmachine: Creating Disk image...
	I0729 05:04:48.821090   23700 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:04:48.821269   23700 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/disk.qcow2
	I0729 05:04:48.830506   23700 main.go:141] libmachine: STDOUT: 
	I0729 05:04:48.830528   23700 main.go:141] libmachine: STDERR: 
	I0729 05:04:48.830591   23700 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/disk.qcow2 +20000M
	I0729 05:04:48.838529   23700 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:04:48.838550   23700 main.go:141] libmachine: STDERR: 
	I0729 05:04:48.838571   23700 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/disk.qcow2
	I0729 05:04:48.838576   23700 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:04:48.838587   23700 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:04:48.838612   23700 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:d6:72:82:85:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/disk.qcow2
	I0729 05:04:48.840200   23700 main.go:141] libmachine: STDOUT: 
	I0729 05:04:48.840216   23700 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:04:48.840234   23700 client.go:171] duration metric: took 197.622208ms to LocalClient.Create
	I0729 05:04:50.842410   23700 start.go:128] duration metric: took 2.221847791s to createHost
	I0729 05:04:50.842515   23700 start.go:83] releasing machines lock for "force-systemd-env-756000", held for 2.222008458s
	W0729 05:04:50.842573   23700 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:04:50.858995   23700 out.go:177] * Deleting "force-systemd-env-756000" in qemu2 ...
	W0729 05:04:50.880893   23700 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:04:50.880917   23700 start.go:729] Will try again in 5 seconds ...
	I0729 05:04:55.883039   23700 start.go:360] acquireMachinesLock for force-systemd-env-756000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:04:55.883581   23700 start.go:364] duration metric: took 399.958µs to acquireMachinesLock for "force-systemd-env-756000"
	I0729 05:04:55.883793   23700 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:04:55.884083   23700 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:04:55.903666   23700 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 05:04:55.953265   23700 start.go:159] libmachine.API.Create for "force-systemd-env-756000" (driver="qemu2")
	I0729 05:04:55.953320   23700 client.go:168] LocalClient.Create starting
	I0729 05:04:55.953510   23700 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:04:55.953592   23700 main.go:141] libmachine: Decoding PEM data...
	I0729 05:04:55.953612   23700 main.go:141] libmachine: Parsing certificate...
	I0729 05:04:55.953678   23700 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:04:55.953726   23700 main.go:141] libmachine: Decoding PEM data...
	I0729 05:04:55.953740   23700 main.go:141] libmachine: Parsing certificate...
	I0729 05:04:55.954224   23700 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:04:56.111661   23700 main.go:141] libmachine: Creating SSH key...
	I0729 05:04:56.220105   23700 main.go:141] libmachine: Creating Disk image...
	I0729 05:04:56.220117   23700 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:04:56.220328   23700 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/disk.qcow2
	I0729 05:04:56.230194   23700 main.go:141] libmachine: STDOUT: 
	I0729 05:04:56.230211   23700 main.go:141] libmachine: STDERR: 
	I0729 05:04:56.230258   23700 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/disk.qcow2 +20000M
	I0729 05:04:56.237992   23700 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:04:56.238005   23700 main.go:141] libmachine: STDERR: 
	I0729 05:04:56.238018   23700 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/disk.qcow2
	I0729 05:04:56.238022   23700 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:04:56.238033   23700 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:04:56.238056   23700 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:67:d1:89:27:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/force-systemd-env-756000/disk.qcow2
	I0729 05:04:56.239661   23700 main.go:141] libmachine: STDOUT: 
	I0729 05:04:56.239675   23700 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:04:56.239694   23700 client.go:171] duration metric: took 286.3695ms to LocalClient.Create
	I0729 05:04:58.241841   23700 start.go:128] duration metric: took 2.357756917s to createHost
	I0729 05:04:58.241922   23700 start.go:83] releasing machines lock for "force-systemd-env-756000", held for 2.358327167s
	W0729 05:04:58.242306   23700 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-756000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-756000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:04:58.257944   23700 out.go:177] 
	W0729 05:04:58.263007   23700 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:04:58.263051   23700 out.go:239] * 
	* 
	W0729 05:04:58.264901   23700 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:04:58.274896   23700 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-756000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-756000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-756000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (68.110416ms)

-- stdout --
	* The control-plane node force-systemd-env-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-756000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-756000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-29 05:04:58.354135 -0700 PDT m=+1329.184496501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-756000 -n force-systemd-env-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-756000 -n force-systemd-env-756000: exit status 7 (35.567542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-756000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-756000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-756000
--- FAIL: TestForceSystemdEnv (9.99s)
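
All of the failures above bottom out in the same error: the qemu2 driver launches the VM through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). Everything after that point ("host is not running: state=Stopped", the missing kubeconfig context, the exit status 83 results) is downstream of the dead daemon. A minimal host-side check, using the paths shown in the logs; the Homebrew service name is an assumption, not something this report confirms:

	# Is the socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"

	# A Homebrew install typically runs the daemon as a root launchd service
	# (assumption: the formula's service is named socket_vmnet):
	sudo brew services start socket_vmnet

	# Probe connectivity the same way the driver does: socket_vmnet_client
	# connects to the socket, then execs the given command.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true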

TestErrorSpam/setup (9.81s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-862000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-862000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 --driver=qemu2 : exit status 80 (9.807839333s)

-- stdout --
	* [nospam-862000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-862000" primary control-plane node in "nospam-862000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-862000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-862000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-862000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-862000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-862000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19338
- KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-862000" primary control-plane node in "nospam-862000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-862000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-862000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.81s)
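
TestErrorSpam asserts two things: a clean start writes nothing unexpected to stderr, and stdout contains the kubeadm init sub-steps listed above. A rough shell rendering of that assertion (the file names are hypothetical; the real logic lives in error_spam_test.go):

	# Capture both streams from a start:
	out/minikube-darwin-arm64 start -p nospam-862000 -n=1 --memory=2250 \
	  --wait=false --driver=qemu2 >stdout.log 2>stderr.log

	# Any stderr line counts as "spam" unless the test allowlists it:
	[ -s stderr.log ] && { echo "unexpected stderr:"; cat stderr.log; }

	# The kubeadm init sub-steps must appear in stdout:
	grep -q "Generating certificates and keys" stdout.log
	grep -q "Booting up control plane" stdout.log
	grep -q "Configuring RBAC rules" stdout.log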

TestFunctional/serial/StartWithProxy (10.09s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-051000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-051000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (10.014138708s)

-- stdout --
	* [functional-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-051000" primary control-plane node in "functional-051000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-051000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53934 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53934 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53934 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-051000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-051000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19338
- KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-051000" primary control-plane node in "functional-051000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-051000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:53934 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:53934 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:53934 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-051000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (73.292875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.09s)
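
StartWithProxy runs the start behind a local HTTP proxy (localhost:53934 in this run) and expects the proxy to be noticed in the output. A reproduction sketch, assuming something is actually listening on that port:

	# Reproduce the proxy scenario by hand (port taken from the log above):
	HTTP_PROXY=localhost:53934 out/minikube-darwin-arm64 start -p functional-051000 \
	  --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2

	# A healthy run prints "Found network options:" and "You appear to be
	# using a proxy"; this run failed before reaching either message.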

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-051000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-051000 --alsologtostderr -v=8: exit status 80 (5.183629375s)

-- stdout --
	* [functional-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-051000" primary control-plane node in "functional-051000" cluster
	* Restarting existing qemu2 VM for "functional-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:43:54.923696   21754 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:43:54.923826   21754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:43:54.923829   21754 out.go:304] Setting ErrFile to fd 2...
	I0729 04:43:54.923831   21754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:43:54.923961   21754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:43:54.924922   21754 out.go:298] Setting JSON to false
	I0729 04:43:54.941105   21754 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9803,"bootTime":1722243631,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:43:54.941178   21754 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:43:54.946279   21754 out.go:177] * [functional-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:43:54.955120   21754 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:43:54.955174   21754 notify.go:220] Checking for updates...
	I0729 04:43:54.962144   21754 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:43:54.965153   21754 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:43:54.968115   21754 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:43:54.971126   21754 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:43:54.974106   21754 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:43:54.977396   21754 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:43:54.977449   21754 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:43:54.981100   21754 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:43:54.988198   21754 start.go:297] selected driver: qemu2
	I0729 04:43:54.988204   21754 start.go:901] validating driver "qemu2" against &{Name:functional-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:43:54.988278   21754 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:43:54.990544   21754 cni.go:84] Creating CNI manager for ""
	I0729 04:43:54.990570   21754 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:43:54.990620   21754 start.go:340] cluster config:
	{Name:functional-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:43:54.994203   21754 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:43:55.003071   21754 out.go:177] * Starting "functional-051000" primary control-plane node in "functional-051000" cluster
	I0729 04:43:55.006072   21754 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:43:55.006092   21754 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:43:55.006103   21754 cache.go:56] Caching tarball of preloaded images
	I0729 04:43:55.006165   21754 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:43:55.006173   21754 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:43:55.006250   21754 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/functional-051000/config.json ...
	I0729 04:43:55.006755   21754 start.go:360] acquireMachinesLock for functional-051000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:43:55.006783   21754 start.go:364] duration metric: took 22.209µs to acquireMachinesLock for "functional-051000"
	I0729 04:43:55.006793   21754 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:43:55.006798   21754 fix.go:54] fixHost starting: 
	I0729 04:43:55.006928   21754 fix.go:112] recreateIfNeeded on functional-051000: state=Stopped err=<nil>
	W0729 04:43:55.006937   21754 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:43:55.014962   21754 out.go:177] * Restarting existing qemu2 VM for "functional-051000" ...
	I0729 04:43:55.019071   21754 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:43:55.019108   21754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:93:73:60:03:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/disk.qcow2
	I0729 04:43:55.021219   21754 main.go:141] libmachine: STDOUT: 
	I0729 04:43:55.021246   21754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:43:55.021274   21754 fix.go:56] duration metric: took 14.475583ms for fixHost
	I0729 04:43:55.021277   21754 start.go:83] releasing machines lock for "functional-051000", held for 14.490667ms
	W0729 04:43:55.021285   21754 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:43:55.021317   21754 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:43:55.021322   21754 start.go:729] Will try again in 5 seconds ...
	I0729 04:44:00.023438   21754 start.go:360] acquireMachinesLock for functional-051000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:44:00.023892   21754 start.go:364] duration metric: took 354.084µs to acquireMachinesLock for "functional-051000"
	I0729 04:44:00.024053   21754 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:44:00.024071   21754 fix.go:54] fixHost starting: 
	I0729 04:44:00.024739   21754 fix.go:112] recreateIfNeeded on functional-051000: state=Stopped err=<nil>
	W0729 04:44:00.024763   21754 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:44:00.027270   21754 out.go:177] * Restarting existing qemu2 VM for "functional-051000" ...
	I0729 04:44:00.031193   21754 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:44:00.031390   21754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:93:73:60:03:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/disk.qcow2
	I0729 04:44:00.040719   21754 main.go:141] libmachine: STDOUT: 
	I0729 04:44:00.040776   21754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:44:00.040846   21754 fix.go:56] duration metric: took 16.772167ms for fixHost
	I0729 04:44:00.040858   21754 start.go:83] releasing machines lock for "functional-051000", held for 16.948166ms
	W0729 04:44:00.040998   21754 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:44:00.048237   21754 out.go:177] 
	W0729 04:44:00.052217   21754 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:44:00.052240   21754 out.go:239] * 
	* 
	W0729 04:44:00.054954   21754 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:44:00.063158   21754 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-051000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.185489s for "functional-051000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (69.083417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (32.2225ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-051000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (30.304917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
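
The KubeContext failure is secondary: minikube writes the functional-051000 context into kubeconfig only once a start succeeds, so after the failed starts above there is nothing for kubectl to find. On a healthy cluster the check reduces to:

	# What the test verifies after a successful start:
	kubectl config current-context            # expected: functional-051000

	# If another context is active, it can be selected explicitly:
	kubectl config use-context functional-051000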

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-051000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-051000 get po -A: exit status 1 (26.559583ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-051000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-051000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-051000\n"*: args "kubectl --context functional-051000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-051000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (30.477333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh sudo crictl images: exit status 83 (42.763333ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-051000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (41.985208ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-051000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.804958ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.984208ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-051000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)
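
For reference, cache_reload exercises the image cache end to end; the commands above can be replayed by hand against a live profile:

	# Delete the image inside the node, reload the cache, verify it is back
	# ("crictl inspecti" inspects an image; a zero exit means it is present):
	out/minikube-darwin-arm64 -p functional-051000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-051000 cache reload
	out/minikube-darwin-arm64 -p functional-051000 ssh sudo crictl inspecti registry.k8s.io/pause:latest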

TestFunctional/serial/MinikubeKubectlCmd (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 kubectl -- --context functional-051000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 kubectl -- --context functional-051000 get pods: exit status 1 (707.036166ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-051000
	* no server found for cluster "functional-051000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-051000 kubectl -- --context functional-051000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (32.723625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-051000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-051000 get pods: exit status 1 (944.687083ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-051000
	* no server found for cluster "functional-051000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-051000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (29.333542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.97s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-051000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-051000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.211487708s)

-- stdout --
	* [functional-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-051000" primary control-plane node in "functional-051000" cluster
	* Restarting existing qemu2 VM for "functional-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-051000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.212191542s for "functional-051000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (72.007667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.29s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-051000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-051000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.96375ms)

** stderr ** 
	error: context "functional-051000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-051000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (30.37425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 logs: exit status 83 (75.613208ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:42 PDT |                     |
	|         | -p download-only-008000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| delete  | -p download-only-008000                                                  | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| start   | -o=json --download-only                                                  | download-only-106000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | -p download-only-106000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| delete  | -p download-only-106000                                                  | download-only-106000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| start   | -o=json --download-only                                                  | download-only-942000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | -p download-only-942000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| delete  | -p download-only-942000                                                  | download-only-942000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| delete  | -p download-only-008000                                                  | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| delete  | -p download-only-106000                                                  | download-only-106000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| delete  | -p download-only-942000                                                  | download-only-942000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| start   | --download-only -p                                                       | binary-mirror-188000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | binary-mirror-188000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:53903                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-188000                                                  | binary-mirror-188000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| addons  | enable dashboard -p                                                      | addons-338000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | addons-338000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-338000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | addons-338000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-338000 --wait=true                                             | addons-338000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-338000                                                         | addons-338000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| start   | -p nospam-862000 -n=1 --memory=2250 --wait=false                         | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-862000                                                         | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| start   | -p functional-051000                                                     | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-051000                                                     | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-051000 cache add                                              | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-051000 cache add                                              | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-051000 cache add                                              | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-051000 cache add                                              | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
	|         | minikube-local-cache-test:functional-051000                              |                      |         |         |                     |                     |
	| cache   | functional-051000 cache delete                                           | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
	|         | minikube-local-cache-test:functional-051000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
	| ssh     | functional-051000 ssh sudo                                               | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-051000                                                        | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-051000 ssh                                                    | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-051000 cache reload                                           | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
	| ssh     | functional-051000 ssh                                                    | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-051000 kubectl --                                             | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
	|         | --context functional-051000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-051000                                                     | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 04:44:05
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 04:44:05.197322   21829 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:44:05.197427   21829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:44:05.197429   21829 out.go:304] Setting ErrFile to fd 2...
	I0729 04:44:05.197431   21829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:44:05.197565   21829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:44:05.198584   21829 out.go:298] Setting JSON to false
	I0729 04:44:05.214353   21829 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9814,"bootTime":1722243631,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:44:05.214424   21829 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:44:05.219802   21829 out.go:177] * [functional-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:44:05.228843   21829 notify.go:220] Checking for updates...
	I0729 04:44:05.233716   21829 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:44:05.241783   21829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:44:05.248744   21829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:44:05.259788   21829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:44:05.267684   21829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:44:05.273738   21829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:44:05.277936   21829 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:44:05.277985   21829 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:44:05.281662   21829 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:44:05.290788   21829 start.go:297] selected driver: qemu2
	I0729 04:44:05.290790   21829 start.go:901] validating driver "qemu2" against &{Name:functional-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:44:05.290839   21829 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:44:05.293417   21829 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:44:05.293452   21829 cni.go:84] Creating CNI manager for ""
	I0729 04:44:05.293463   21829 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:44:05.293508   21829 start.go:340] cluster config:
	{Name:functional-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:44:05.297554   21829 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:44:05.306740   21829 out.go:177] * Starting "functional-051000" primary control-plane node in "functional-051000" cluster
	I0729 04:44:05.310729   21829 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:44:05.310750   21829 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:44:05.310761   21829 cache.go:56] Caching tarball of preloaded images
	I0729 04:44:05.310833   21829 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:44:05.310845   21829 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:44:05.310904   21829 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/functional-051000/config.json ...
	I0729 04:44:05.311395   21829 start.go:360] acquireMachinesLock for functional-051000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:44:05.311436   21829 start.go:364] duration metric: took 35.583µs to acquireMachinesLock for "functional-051000"
	I0729 04:44:05.311445   21829 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:44:05.311450   21829 fix.go:54] fixHost starting: 
	I0729 04:44:05.311585   21829 fix.go:112] recreateIfNeeded on functional-051000: state=Stopped err=<nil>
	W0729 04:44:05.311592   21829 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:44:05.319731   21829 out.go:177] * Restarting existing qemu2 VM for "functional-051000" ...
	I0729 04:44:05.323715   21829 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:44:05.323757   21829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:93:73:60:03:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/disk.qcow2
	I0729 04:44:05.325987   21829 main.go:141] libmachine: STDOUT: 
	I0729 04:44:05.326003   21829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:44:05.326031   21829 fix.go:56] duration metric: took 14.580667ms for fixHost
	I0729 04:44:05.326033   21829 start.go:83] releasing machines lock for "functional-051000", held for 14.594667ms
	W0729 04:44:05.326039   21829 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:44:05.326077   21829 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:44:05.326083   21829 start.go:729] Will try again in 5 seconds ...
	I0729 04:44:10.328145   21829 start.go:360] acquireMachinesLock for functional-051000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:44:10.328472   21829 start.go:364] duration metric: took 286.125µs to acquireMachinesLock for "functional-051000"
	I0729 04:44:10.328588   21829 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:44:10.328602   21829 fix.go:54] fixHost starting: 
	I0729 04:44:10.329230   21829 fix.go:112] recreateIfNeeded on functional-051000: state=Stopped err=<nil>
	W0729 04:44:10.329249   21829 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:44:10.337543   21829 out.go:177] * Restarting existing qemu2 VM for "functional-051000" ...
	I0729 04:44:10.341568   21829 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:44:10.341733   21829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:93:73:60:03:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/disk.qcow2
	I0729 04:44:10.350411   21829 main.go:141] libmachine: STDOUT: 
	I0729 04:44:10.350453   21829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:44:10.350527   21829 fix.go:56] duration metric: took 21.931042ms for fixHost
	I0729 04:44:10.350538   21829 start.go:83] releasing machines lock for "functional-051000", held for 22.054959ms
	W0729 04:44:10.350734   21829 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:44:10.356563   21829 out.go:177] 
	W0729 04:44:10.360602   21829 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:44:10.360624   21829 out.go:239] * 
	W0729 04:44:10.363324   21829 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:44:10.369553   21829 out.go:177] 
	
	
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-051000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:42 PDT |                     |
|         | -p download-only-008000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| delete  | -p download-only-008000                                                  | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| start   | -o=json --download-only                                                  | download-only-106000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | -p download-only-106000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| delete  | -p download-only-106000                                                  | download-only-106000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| start   | -o=json --download-only                                                  | download-only-942000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | -p download-only-942000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| delete  | -p download-only-942000                                                  | download-only-942000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| delete  | -p download-only-008000                                                  | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| delete  | -p download-only-106000                                                  | download-only-106000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| delete  | -p download-only-942000                                                  | download-only-942000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| start   | --download-only -p                                                       | binary-mirror-188000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | binary-mirror-188000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:53903                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-188000                                                  | binary-mirror-188000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| addons  | enable dashboard -p                                                      | addons-338000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | addons-338000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-338000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | addons-338000                                                            |                      |         |         |                     |                     |
| start   | -p addons-338000 --wait=true                                             | addons-338000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-338000                                                         | addons-338000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| start   | -p nospam-862000 -n=1 --memory=2250 --wait=false                         | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-862000                                                         | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| start   | -p functional-051000                                                     | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-051000                                                     | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-051000 cache add                                              | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-051000 cache add                                              | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-051000 cache add                                              | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-051000 cache add                                              | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | minikube-local-cache-test:functional-051000                              |                      |         |         |                     |                     |
| cache   | functional-051000 cache delete                                           | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | minikube-local-cache-test:functional-051000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
| ssh     | functional-051000 ssh sudo                                               | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-051000                                                        | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-051000 ssh                                                    | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-051000 cache reload                                           | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
| ssh     | functional-051000 ssh                                                    | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-051000 kubectl --                                             | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
|         | --context functional-051000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-051000                                                     | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/07/29 04:44:05
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0729 04:44:05.197322   21829 out.go:291] Setting OutFile to fd 1 ...
I0729 04:44:05.197427   21829 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:05.197429   21829 out.go:304] Setting ErrFile to fd 2...
I0729 04:44:05.197431   21829 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:05.197565   21829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
I0729 04:44:05.198584   21829 out.go:298] Setting JSON to false
I0729 04:44:05.214353   21829 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9814,"bootTime":1722243631,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0729 04:44:05.214424   21829 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0729 04:44:05.219802   21829 out.go:177] * [functional-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0729 04:44:05.228843   21829 notify.go:220] Checking for updates...
I0729 04:44:05.233716   21829 out.go:177]   - MINIKUBE_LOCATION=19338
I0729 04:44:05.241783   21829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
I0729 04:44:05.248744   21829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0729 04:44:05.259788   21829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0729 04:44:05.267684   21829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
I0729 04:44:05.273738   21829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0729 04:44:05.277936   21829 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:44:05.277985   21829 driver.go:392] Setting default libvirt URI to qemu:///system
I0729 04:44:05.281662   21829 out.go:177] * Using the qemu2 driver based on existing profile
I0729 04:44:05.290788   21829 start.go:297] selected driver: qemu2
I0729 04:44:05.290790   21829 start.go:901] validating driver "qemu2" against &{Name:functional-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 04:44:05.290839   21829 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0729 04:44:05.293417   21829 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0729 04:44:05.293452   21829 cni.go:84] Creating CNI manager for ""
I0729 04:44:05.293463   21829 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0729 04:44:05.293508   21829 start.go:340] cluster config:
{Name:functional-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 04:44:05.297554   21829 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0729 04:44:05.306740   21829 out.go:177] * Starting "functional-051000" primary control-plane node in "functional-051000" cluster
I0729 04:44:05.310729   21829 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 04:44:05.310750   21829 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 04:44:05.310761   21829 cache.go:56] Caching tarball of preloaded images
I0729 04:44:05.310833   21829 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 04:44:05.310845   21829 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 04:44:05.310904   21829 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/functional-051000/config.json ...
I0729 04:44:05.311395   21829 start.go:360] acquireMachinesLock for functional-051000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 04:44:05.311436   21829 start.go:364] duration metric: took 35.583µs to acquireMachinesLock for "functional-051000"
I0729 04:44:05.311445   21829 start.go:96] Skipping create...Using existing machine configuration
I0729 04:44:05.311450   21829 fix.go:54] fixHost starting: 
I0729 04:44:05.311585   21829 fix.go:112] recreateIfNeeded on functional-051000: state=Stopped err=<nil>
W0729 04:44:05.311592   21829 fix.go:138] unexpected machine state, will restart: <nil>
I0729 04:44:05.319731   21829 out.go:177] * Restarting existing qemu2 VM for "functional-051000" ...
I0729 04:44:05.323715   21829 qemu.go:418] Using hvf for hardware acceleration
I0729 04:44:05.323757   21829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:93:73:60:03:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/disk.qcow2
I0729 04:44:05.325987   21829 main.go:141] libmachine: STDOUT: 
I0729 04:44:05.326003   21829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 04:44:05.326031   21829 fix.go:56] duration metric: took 14.580667ms for fixHost
I0729 04:44:05.326033   21829 start.go:83] releasing machines lock for "functional-051000", held for 14.594667ms
W0729 04:44:05.326039   21829 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 04:44:05.326077   21829 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 04:44:05.326083   21829 start.go:729] Will try again in 5 seconds ...
I0729 04:44:10.328145   21829 start.go:360] acquireMachinesLock for functional-051000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 04:44:10.328472   21829 start.go:364] duration metric: took 286.125µs to acquireMachinesLock for "functional-051000"
I0729 04:44:10.328588   21829 start.go:96] Skipping create...Using existing machine configuration
I0729 04:44:10.328602   21829 fix.go:54] fixHost starting: 
I0729 04:44:10.329230   21829 fix.go:112] recreateIfNeeded on functional-051000: state=Stopped err=<nil>
W0729 04:44:10.329249   21829 fix.go:138] unexpected machine state, will restart: <nil>
I0729 04:44:10.337543   21829 out.go:177] * Restarting existing qemu2 VM for "functional-051000" ...
I0729 04:44:10.341568   21829 qemu.go:418] Using hvf for hardware acceleration
I0729 04:44:10.341733   21829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:93:73:60:03:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/disk.qcow2
I0729 04:44:10.350411   21829 main.go:141] libmachine: STDOUT: 
I0729 04:44:10.350453   21829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 04:44:10.350527   21829 fix.go:56] duration metric: took 21.931042ms for fixHost
I0729 04:44:10.350538   21829 start.go:83] releasing machines lock for "functional-051000", held for 22.054959ms
W0729 04:44:10.350734   21829 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 04:44:10.356563   21829 out.go:177] 
W0729 04:44:10.360602   21829 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 04:44:10.360624   21829 out.go:239] * 
W0729 04:44:10.363324   21829 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 04:44:10.369553   21829 out.go:177] 

* The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
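Both restart attempts in the Last Start log above fail the same way: the qemu2 driver cannot reach the socket_vmnet socket at /var/run/socket_vmnet, so the VM never comes up. A minimal recovery sketch follows; it assumes socket_vmnet was installed via Homebrew (the service name and the use of brew services are assumptions, not part of this report), and otherwise reuses only the commands the report itself prints:

    # Check whether the socket the qemu2 driver expects is actually present
    # (this is the path from the "Connection refused" error above):
    ls -l /var/run/socket_vmnet

    # If it is missing, restart the daemon (Homebrew-managed install assumed):
    sudo brew services restart socket_vmnet

    # Then recreate the profile, as the report's own advice suggests:
    minikube delete -p functional-051000
    minikube start -p functional-051000 --driver=qemu2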

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd692806561/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:42 PDT |                     |
|         | -p download-only-008000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| delete  | -p download-only-008000                                                  | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| start   | -o=json --download-only                                                  | download-only-106000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | -p download-only-106000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| delete  | -p download-only-106000                                                  | download-only-106000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| start   | -o=json --download-only                                                  | download-only-942000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | -p download-only-942000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| delete  | -p download-only-942000                                                  | download-only-942000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| delete  | -p download-only-008000                                                  | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| delete  | -p download-only-106000                                                  | download-only-106000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| delete  | -p download-only-942000                                                  | download-only-942000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| start   | --download-only -p                                                       | binary-mirror-188000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | binary-mirror-188000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:53903                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-188000                                                  | binary-mirror-188000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| addons  | enable dashboard -p                                                      | addons-338000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | addons-338000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-338000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | addons-338000                                                            |                      |         |         |                     |                     |
| start   | -p addons-338000 --wait=true                                             | addons-338000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-338000                                                         | addons-338000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| start   | -p nospam-862000 -n=1 --memory=2250 --wait=false                         | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-862000 --log_dir                                                  | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-862000                                                         | nospam-862000        | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
| start   | -p functional-051000                                                     | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-051000                                                     | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-051000 cache add                                              | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-051000 cache add                                              | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-051000 cache add                                              | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-051000 cache add                                              | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | minikube-local-cache-test:functional-051000                              |                      |         |         |                     |                     |
| cache   | functional-051000 cache delete                                           | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | minikube-local-cache-test:functional-051000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
| ssh     | functional-051000 ssh sudo                                               | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-051000                                                        | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-051000 ssh                                                    | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-051000 cache reload                                           | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
| ssh     | functional-051000 ssh                                                    | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT | 29 Jul 24 04:44 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-051000 kubectl --                                             | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
|         | --context functional-051000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-051000                                                     | functional-051000    | jenkins | v1.33.1 | 29 Jul 24 04:44 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/07/29 04:44:05
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0729 04:44:05.197322   21829 out.go:291] Setting OutFile to fd 1 ...
I0729 04:44:05.197427   21829 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:05.197429   21829 out.go:304] Setting ErrFile to fd 2...
I0729 04:44:05.197431   21829 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:05.197565   21829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
I0729 04:44:05.198584   21829 out.go:298] Setting JSON to false
I0729 04:44:05.214353   21829 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9814,"bootTime":1722243631,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0729 04:44:05.214424   21829 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0729 04:44:05.219802   21829 out.go:177] * [functional-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0729 04:44:05.228843   21829 notify.go:220] Checking for updates...
I0729 04:44:05.233716   21829 out.go:177]   - MINIKUBE_LOCATION=19338
I0729 04:44:05.241783   21829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
I0729 04:44:05.248744   21829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0729 04:44:05.259788   21829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0729 04:44:05.267684   21829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
I0729 04:44:05.273738   21829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0729 04:44:05.277936   21829 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:44:05.277985   21829 driver.go:392] Setting default libvirt URI to qemu:///system
I0729 04:44:05.281662   21829 out.go:177] * Using the qemu2 driver based on existing profile
I0729 04:44:05.290788   21829 start.go:297] selected driver: qemu2
I0729 04:44:05.290790   21829 start.go:901] validating driver "qemu2" against &{Name:functional-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 04:44:05.290839   21829 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0729 04:44:05.293417   21829 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0729 04:44:05.293452   21829 cni.go:84] Creating CNI manager for ""
I0729 04:44:05.293463   21829 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0729 04:44:05.293508   21829 start.go:340] cluster config:
{Name:functional-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 04:44:05.297554   21829 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0729 04:44:05.306740   21829 out.go:177] * Starting "functional-051000" primary control-plane node in "functional-051000" cluster
I0729 04:44:05.310729   21829 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 04:44:05.310750   21829 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 04:44:05.310761   21829 cache.go:56] Caching tarball of preloaded images
I0729 04:44:05.310833   21829 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 04:44:05.310845   21829 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 04:44:05.310904   21829 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/functional-051000/config.json ...
I0729 04:44:05.311395   21829 start.go:360] acquireMachinesLock for functional-051000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 04:44:05.311436   21829 start.go:364] duration metric: took 35.583µs to acquireMachinesLock for "functional-051000"
I0729 04:44:05.311445   21829 start.go:96] Skipping create...Using existing machine configuration
I0729 04:44:05.311450   21829 fix.go:54] fixHost starting: 
I0729 04:44:05.311585   21829 fix.go:112] recreateIfNeeded on functional-051000: state=Stopped err=<nil>
W0729 04:44:05.311592   21829 fix.go:138] unexpected machine state, will restart: <nil>
I0729 04:44:05.319731   21829 out.go:177] * Restarting existing qemu2 VM for "functional-051000" ...
I0729 04:44:05.323715   21829 qemu.go:418] Using hvf for hardware acceleration
I0729 04:44:05.323757   21829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:93:73:60:03:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/disk.qcow2
I0729 04:44:05.325987   21829 main.go:141] libmachine: STDOUT: 
I0729 04:44:05.326003   21829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 04:44:05.326031   21829 fix.go:56] duration metric: took 14.580667ms for fixHost
I0729 04:44:05.326033   21829 start.go:83] releasing machines lock for "functional-051000", held for 14.594667ms
W0729 04:44:05.326039   21829 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 04:44:05.326077   21829 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 04:44:05.326083   21829 start.go:729] Will try again in 5 seconds ...
I0729 04:44:10.328145   21829 start.go:360] acquireMachinesLock for functional-051000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 04:44:10.328472   21829 start.go:364] duration metric: took 286.125µs to acquireMachinesLock for "functional-051000"
I0729 04:44:10.328588   21829 start.go:96] Skipping create...Using existing machine configuration
I0729 04:44:10.328602   21829 fix.go:54] fixHost starting: 
I0729 04:44:10.329230   21829 fix.go:112] recreateIfNeeded on functional-051000: state=Stopped err=<nil>
W0729 04:44:10.329249   21829 fix.go:138] unexpected machine state, will restart: <nil>
I0729 04:44:10.337543   21829 out.go:177] * Restarting existing qemu2 VM for "functional-051000" ...
I0729 04:44:10.341568   21829 qemu.go:418] Using hvf for hardware acceleration
I0729 04:44:10.341733   21829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:93:73:60:03:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/functional-051000/disk.qcow2
I0729 04:44:10.350411   21829 main.go:141] libmachine: STDOUT: 
I0729 04:44:10.350453   21829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 04:44:10.350527   21829 fix.go:56] duration metric: took 21.931042ms for fixHost
I0729 04:44:10.350538   21829 start.go:83] releasing machines lock for "functional-051000", held for 22.054959ms
W0729 04:44:10.350734   21829 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 04:44:10.356563   21829 out.go:177] 
W0729 04:44:10.360602   21829 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 04:44:10.360624   21829 out.go:239] * 
W0729 04:44:10.363324   21829 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 04:44:10.369553   21829 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
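
The failure above is mechanical: libmachine cannot open the unix socket at /var/run/socket_vmnet before handing qemu-system-aarch64 to socket_vmnet_client, so both start attempts die with "Connection refused". A minimal, illustrative Go probe of that socket (not part of the test suite; the path comes from the log above, the timeout is an arbitrary choice):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket socket_vmnet_client needs; a "connection
	// refused" here matches the driver-start failure captured above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails the same way on the agent, the socket_vmnet daemon is simply not running there, which would be consistent with every GUEST_PROVISION failure in this report.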

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-051000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-051000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.805625ms)

** stderr ** 
	error: context "functional-051000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-051000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-051000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-051000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-051000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-051000 --alsologtostderr -v=1] stderr:
I0729 04:44:54.523364   22093 out.go:291] Setting OutFile to fd 1 ...
I0729 04:44:54.523803   22093 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:54.523807   22093 out.go:304] Setting ErrFile to fd 2...
I0729 04:44:54.523809   22093 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:54.523995   22093 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
I0729 04:44:54.524221   22093 mustload.go:65] Loading cluster: functional-051000
I0729 04:44:54.524423   22093 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:44:54.529215   22093 out.go:177] * The control-plane node functional-051000 host is not running: state=Stopped
I0729 04:44:54.532248   22093 out.go:177]   To start a cluster, run: "minikube start -p functional-051000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (42.086ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 status: exit status 7 (72.375375ms)

-- stdout --
	functional-051000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-051000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (33.830042ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-051000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
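
For reference, the -f argument in the command above is a Go text/template rendered against the status struct, which is why the misspelled key "kublet" in the harness's format string passes straight through into the output. A minimal sketch with illustrative values (the struct and values here are assumptions, not harness code):

package main

import (
	"os"
	"text/template"
)

// status carries only the fields the template above references.
type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Same template shape the test passes via -f, typo included.
	const f = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	t := template.Must(template.New("status").Parse(f))
	_ = t.Execute(os.Stdout, status{"Stopped", "Stopped", "Stopped", "Stopped"})
}
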
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 status -o json: exit status 7 (30.531709ms)

-- stdout --
	{"Name":"functional-051000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-051000 status -o json" : exit status 7
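
The -o json stdout above is a single flat object, so it decodes into a small struct; a minimal sketch assuming only the fields visible in this run's output (the struct name is illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors the fields visible in the -o json stdout above.
type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"functional-051000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
	var s clusterStatus
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Printf("%s: host=%s, apiserver=%s\n", s.Name, s.Host, s.APIServer)
}
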
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (30.362542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.17s)

TestFunctional/parallel/ServiceCmdConnect (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-051000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-051000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.108167ms)

** stderr ** 
	error: context "functional-051000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-051000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-051000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-051000 describe po hello-node-connect: exit status 1 (25.948583ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-051000

** /stderr **
functional_test.go:1600: "kubectl --context functional-051000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-051000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-051000 logs -l app=hello-node-connect: exit status 1 (25.799958ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-051000

** /stderr **
functional_test.go:1606: "kubectl --context functional-051000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-051000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-051000 describe svc hello-node-connect: exit status 1 (25.902166ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-051000

** /stderr **
functional_test.go:1612: "kubectl --context functional-051000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (28.8265ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-051000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (29.439ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "echo hello": exit status 83 (42.49325ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-051000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-051000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-051000\"\n"*. args "out/minikube-darwin-arm64 -p functional-051000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "cat /etc/hostname": exit status 83 (42.994334ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-051000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-051000"- but got *"* The control-plane node functional-051000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-051000\"\n"*. args "out/minikube-darwin-arm64 -p functional-051000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (29.47425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.25s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (42.40025ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-051000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh -n functional-051000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh -n functional-051000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.952792ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-051000 ssh -n functional-051000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-051000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-051000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 cp functional-051000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd4091169212/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 cp functional-051000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd4091169212/001/cp-test.txt: exit status 83 (40.598875ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-051000 cp functional-051000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd4091169212/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh -n functional-051000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh -n functional-051000 "sudo cat /home/docker/cp-test.txt": exit status 83 (39.770708ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-051000 ssh -n functional-051000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd4091169212/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-051000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-051000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (44.932084ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-051000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh -n functional-051000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh -n functional-051000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (41.768875ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-051000 ssh -n functional-051000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-051000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-051000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.25s)
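
The -want +got blocks in this failure are go-cmp diff reports: the test wanted the copied file's contents and instead got the driver's "host is not running" advice. A minimal sketch that reproduces this diff shape, assuming the github.com/google/go-cmp module (an assumption; only its output format is visible in the log):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-051000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-051000\"\n"
	// cmp.Diff renders a -want +got report like the ones in this log.
	fmt.Println(cmp.Diff(want, got))
}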

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/21508/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /etc/test/nested/copy/21508/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /etc/test/nested/copy/21508/hosts": exit status 83 (39.927209ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /etc/test/nested/copy/21508/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-051000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-051000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (29.054417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.27s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/21508.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /etc/ssl/certs/21508.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /etc/ssl/certs/21508.pem": exit status 83 (39.326667ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/21508.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-051000 ssh \"sudo cat /etc/ssl/certs/21508.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/21508.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-051000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-051000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/21508.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /usr/share/ca-certificates/21508.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /usr/share/ca-certificates/21508.pem": exit status 83 (38.17825ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/21508.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-051000 ssh \"sudo cat /usr/share/ca-certificates/21508.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/21508.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-051000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-051000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (41.678667ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-051000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-051000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-051000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/215082.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /etc/ssl/certs/215082.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /etc/ssl/certs/215082.pem": exit status 83 (39.827542ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/215082.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-051000 ssh \"sudo cat /etc/ssl/certs/215082.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/215082.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-051000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-051000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/215082.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /usr/share/ca-certificates/215082.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /usr/share/ca-certificates/215082.pem": exit status 83 (39.585167ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/215082.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-051000 ssh \"sudo cat /usr/share/ca-certificates/215082.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/215082.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-051000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-051000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (41.728958ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-051000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-051000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-051000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (29.790625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.27s)
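Note on the failure mode: in each diff above the "want" side is the PEM from minikube_test2.pem and the "got" side is minikube's host-not-running message, because every `minikube ssh "sudo cat ..."` exited with status 83 before any file could be read. A minimal Go sketch of this kind of check (illustrative names, not the actual test code):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // verifyCertSync reads the local PEM and cats the remote copy over
    // `minikube ssh`, reporting any byte difference (illustrative helper).
    func verifyCertSync(profile, localPEM, remotePath string) error {
        want, err := os.ReadFile(localPEM)
        if err != nil {
            return err
        }
        got, err := exec.Command("out/minikube-darwin-arm64", "-p", profile,
            "ssh", "sudo cat "+remotePath).Output()
        if err != nil {
            return fmt.Errorf("ssh failed (host not running?): %w", err)
        }
        if !bytes.Equal(want, got) {
            return fmt.Errorf("%s -> %s mismatch", localPEM, remotePath)
        }
        return nil
    }

    func main() {
        fmt.Println(verifyCertSync("functional-051000",
            "minikube_test2.pem", "/etc/ssl/certs/3ec20f2e.0"))
    }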

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-051000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-051000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.987959ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-051000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-051000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-051000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-051000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-051000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-051000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-051000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-051000 -n functional-051000: exit status 7 (30.481625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
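The go-template in this test only iterates label keys. A self-contained sketch of the same template evaluated against a plain label map (illustrative data, assuming the standard text/template package that kubectl's go-template output is modeled on):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // Illustrative label set; the real test reads these from the node object.
        labels := map[string]string{
            "minikube.k8s.io/name":    "functional-051000",
            "minikube.k8s.io/primary": "true",
        }
        // Same template as the kubectl invocation above: print each key.
        tmpl := template.Must(template.New("labels").
            Parse("{{range $k, $v := .}}{{$k}} {{end}}"))
        tmpl.Execute(os.Stdout, labels) // "minikube.k8s.io/name minikube.k8s.io/primary "
    }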

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "sudo systemctl is-active crio": exit status 83 (51.167458ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-051000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-051000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 version -o=json --components: exit status 83 (38.994333ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-051000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-051000 image ls --format short --alsologtostderr:
I0729 04:44:55.217313   22122 out.go:291] Setting OutFile to fd 1 ...
I0729 04:44:55.217454   22122 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:55.217457   22122 out.go:304] Setting ErrFile to fd 2...
I0729 04:44:55.217460   22122 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:55.217587   22122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
I0729 04:44:55.217987   22122 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:44:55.218043   22122 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-051000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-051000 image ls --format table --alsologtostderr:
I0729 04:44:55.431816   22135 out.go:291] Setting OutFile to fd 1 ...
I0729 04:44:55.431957   22135 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:55.431960   22135 out.go:304] Setting ErrFile to fd 2...
I0729 04:44:55.431963   22135 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:55.432085   22135 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
I0729 04:44:55.432538   22135 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:44:55.432600   22135 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-051000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-051000 image ls --format json --alsologtostderr:
I0729 04:44:55.396462   22132 out.go:291] Setting OutFile to fd 1 ...
I0729 04:44:55.396837   22132 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:55.396843   22132 out.go:304] Setting ErrFile to fd 2...
I0729 04:44:55.396846   22132 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:55.397020   22132 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
I0729 04:44:55.397720   22132 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:44:55.397793   22132 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-051000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-051000 image ls --format yaml --alsologtostderr:
I0729 04:44:55.252847   22124 out.go:291] Setting OutFile to fd 1 ...
I0729 04:44:55.252985   22124 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:55.252989   22124 out.go:304] Setting ErrFile to fd 2...
I0729 04:44:55.252991   22124 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:55.253149   22124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
I0729 04:44:55.253585   22124 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:44:55.253646   22124 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)
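All four `image ls` formats (short, table, json, yaml) fail the same way: with the host stopped the image list is empty, so the expected registry.k8s.io/pause entry is missing. A rough table-driven sketch of the shared assertion (helper layout is illustrative; the command and expectation are taken from the log above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, format := range []string{"short", "table", "json", "yaml"} {
            out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-051000",
                "image", "ls", "--format", format).Output()
            if !strings.Contains(string(out), "registry.k8s.io/pause") {
                fmt.Printf("format %q: pause image not listed\n", format)
            }
        }
    }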

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh pgrep buildkitd: exit status 83 (38.939667ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image build -t localhost/my-image:functional-051000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-051000 image build -t localhost/my-image:functional-051000 testdata/build --alsologtostderr:
I0729 04:44:55.326004   22128 out.go:291] Setting OutFile to fd 1 ...
I0729 04:44:55.326340   22128 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:55.326343   22128 out.go:304] Setting ErrFile to fd 2...
I0729 04:44:55.326346   22128 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:55.326482   22128 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
I0729 04:44:55.326859   22128 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:44:55.327273   22128 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:44:55.327502   22128 build_images.go:133] succeeded building to: 
I0729 04:44:55.327505   22128 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image ls
functional_test.go:442: expected "localhost/my-image:functional-051000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-051000 docker-env) && out/minikube-darwin-arm64 status -p functional-051000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-051000 docker-env) && out/minikube-darwin-arm64 status -p functional-051000": exit status 1 (42.82525ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 update-context --alsologtostderr -v=2: exit status 83 (45.763833ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
** stderr ** 
	I0729 04:44:55.090153   22111 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:44:55.090909   22111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:44:55.090912   22111 out.go:304] Setting ErrFile to fd 2...
	I0729 04:44:55.090915   22111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:44:55.091045   22111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:44:55.091239   22111 mustload.go:65] Loading cluster: functional-051000
	I0729 04:44:55.091417   22111 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:44:55.095534   22111 out.go:177] * The control-plane node functional-051000 host is not running: state=Stopped
	I0729 04:44:55.102538   22111 out.go:177]   To start a cluster, run: "minikube start -p functional-051000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-051000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-051000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-051000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 update-context --alsologtostderr -v=2: exit status 83 (40.550833ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
** stderr ** 
	I0729 04:44:55.135654   22114 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:44:55.135794   22114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:44:55.135798   22114 out.go:304] Setting ErrFile to fd 2...
	I0729 04:44:55.135800   22114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:44:55.135932   22114 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:44:55.136131   22114 mustload.go:65] Loading cluster: functional-051000
	I0729 04:44:55.136309   22114 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:44:55.140423   22114 out.go:177] * The control-plane node functional-051000 host is not running: state=Stopped
	I0729 04:44:55.143532   22114 out.go:177]   To start a cluster, run: "minikube start -p functional-051000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-051000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-051000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-051000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 update-context --alsologtostderr -v=2: exit status 83 (40.687166ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
** stderr ** 
	I0729 04:44:55.177043   22118 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:44:55.177181   22118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:44:55.177185   22118 out.go:304] Setting ErrFile to fd 2...
	I0729 04:44:55.177187   22118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:44:55.177308   22118 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:44:55.177562   22118 mustload.go:65] Loading cluster: functional-051000
	I0729 04:44:55.177747   22118 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:44:55.182485   22118 out.go:177] * The control-plane node functional-051000 host is not running: state=Stopped
	I0729 04:44:55.185565   22118 out.go:177]   To start a cluster, run: "minikube start -p functional-051000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-051000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-051000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-051000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-051000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-051000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0729 04:44:11.789397   21911 out.go:291] Setting OutFile to fd 1 ...
I0729 04:44:11.789549   21911 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:11.789555   21911 out.go:304] Setting ErrFile to fd 2...
I0729 04:44:11.789557   21911 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:44:11.789688   21911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
I0729 04:44:11.789940   21911 mustload.go:65] Loading cluster: functional-051000
I0729 04:44:11.790169   21911 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:44:11.795199   21911 out.go:177] * The control-plane node functional-051000 host is not running: state=Stopped
I0729 04:44:11.806045   21911 out.go:177]   To start a cluster, run: "minikube start -p functional-051000"

stdout: * The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-051000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-051000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-051000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-051000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 21912: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-051000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-051000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-051000": client config: context "functional-051000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (82.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-051000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-051000 get svc nginx-svc: exit status 1 (69.321167ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-051000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-051000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (82.43s)
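The "no Host in request URL" error is the net/http client rejecting the request before it is ever sent: the tunnel never published a service IP, so the test hit a URL with an empty host. A minimal reproduction:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // The tunnel produced no IP, so the URL has an empty host.
        _, err := http.Get("http://")
        fmt.Println(err) // error contains: http: no Host in request URL
    }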

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image load --daemon docker.io/kicbase/echo-server:functional-051000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-051000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image load --daemon docker.io/kicbase/echo-server:functional-051000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-051000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-051000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image load --daemon docker.io/kicbase/echo-server:functional-051000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-051000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.11s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image save docker.io/kicbase/echo-server:functional-051000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-051000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-051000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-051000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.999125ms)

** stderr ** 
	error: context "functional-051000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-051000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 service list: exit status 83 (41.349459ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-051000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-051000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-051000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 service list -o json: exit status 83 (40.777541ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-051000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 service --namespace=default --https --url hello-node: exit status 83 (52.972708ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-051000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 service hello-node --url --format={{.IP}}: exit status 83 (42.610708ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-051000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-051000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-051000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 service hello-node --url: exit status 83 (39.86925ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-051000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-051000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-051000"
functional_test.go:1565: failed to parse "* The control-plane node functional-051000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-051000\"": parse "* The control-plane node functional-051000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-051000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
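The final parse failure is net/url refusing the two-line minikube message: it contains a newline, which Go treats as an invalid control character in a URL. A minimal reproduction:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        msg := "* The control-plane node ... state=Stopped\n  To start a cluster ..."
        _, err := url.Parse(msg)
        fmt.Println(err) // net/url: invalid control character in URL
    }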

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.029316125s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
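dig gives up after its three 5-second tries (exit status 9 is dig's "no reply from server"). Resolver #8 above scopes cluster.local to 10.96.0.10, the in-cluster DNS service, which is only reachable while a tunnel is up. A Go sketch of the same lookup pinned to that resolver (timeout values illustrative):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Pin every DNS query to the cluster DNS address from resolver #8.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 5 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
        fmt.Println(addrs, err) // times out while no tunnel is running
    }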

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.91s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.91s)
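The "(Client.Timeout exceeded while awaiting headers)" suffix is how net/http reports an overall client timeout that expires before response headers arrive. A minimal sketch (timeout value illustrative):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        c := &http.Client{Timeout: 2 * time.Second}
        _, err := c.Get("http://nginx-svc.default.svc.cluster.local./")
        fmt.Println(err) // ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    }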

TestMultiControlPlane/serial/StartCluster (9.94s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-851000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-851000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.868232958s)

-- stdout --
	* [ha-851000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-851000" primary control-plane node in "ha-851000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-851000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:46:37.663696   22211 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:46:37.663828   22211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:46:37.663831   22211 out.go:304] Setting ErrFile to fd 2...
	I0729 04:46:37.663834   22211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:46:37.663982   22211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:46:37.665072   22211 out.go:298] Setting JSON to false
	I0729 04:46:37.681265   22211 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9966,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:46:37.681340   22211 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:46:37.686831   22211 out.go:177] * [ha-851000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:46:37.694792   22211 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:46:37.694840   22211 notify.go:220] Checking for updates...
	I0729 04:46:37.700705   22211 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:46:37.703817   22211 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:46:37.706814   22211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:46:37.708098   22211 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:46:37.710792   22211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:46:37.713987   22211 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:46:37.717668   22211 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:46:37.724791   22211 start.go:297] selected driver: qemu2
	I0729 04:46:37.724800   22211 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:46:37.724808   22211 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:46:37.727002   22211 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:46:37.729854   22211 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:46:37.732941   22211 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:46:37.732975   22211 cni.go:84] Creating CNI manager for ""
	I0729 04:46:37.732980   22211 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 04:46:37.732985   22211 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 04:46:37.733025   22211 start.go:340] cluster config:
	{Name:ha-851000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-851000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:46:37.736665   22211 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:46:37.744726   22211 out.go:177] * Starting "ha-851000" primary control-plane node in "ha-851000" cluster
	I0729 04:46:37.748847   22211 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:46:37.748861   22211 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:46:37.748869   22211 cache.go:56] Caching tarball of preloaded images
	I0729 04:46:37.748921   22211 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:46:37.748931   22211 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:46:37.749117   22211 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/ha-851000/config.json ...
	I0729 04:46:37.749128   22211 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/ha-851000/config.json: {Name:mk7932dcddb57e151be6907c169c168b6ddfcb6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:46:37.749470   22211 start.go:360] acquireMachinesLock for ha-851000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:46:37.749506   22211 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "ha-851000"
	I0729 04:46:37.749518   22211 start.go:93] Provisioning new machine with config: &{Name:ha-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-851000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:46:37.749551   22211 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:46:37.757785   22211 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:46:37.775173   22211 start.go:159] libmachine.API.Create for "ha-851000" (driver="qemu2")
	I0729 04:46:37.775210   22211 client.go:168] LocalClient.Create starting
	I0729 04:46:37.775288   22211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 04:46:37.775321   22211 main.go:141] libmachine: Decoding PEM data...
	I0729 04:46:37.775331   22211 main.go:141] libmachine: Parsing certificate...
	I0729 04:46:37.775371   22211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 04:46:37.775400   22211 main.go:141] libmachine: Decoding PEM data...
	I0729 04:46:37.775411   22211 main.go:141] libmachine: Parsing certificate...
	I0729 04:46:37.775788   22211 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:46:37.928884   22211 main.go:141] libmachine: Creating SSH key...
	I0729 04:46:38.052612   22211 main.go:141] libmachine: Creating Disk image...
	I0729 04:46:38.052618   22211 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:46:38.052810   22211 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2
	I0729 04:46:38.062087   22211 main.go:141] libmachine: STDOUT: 
	I0729 04:46:38.062106   22211 main.go:141] libmachine: STDERR: 
	I0729 04:46:38.062151   22211 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2 +20000M
	I0729 04:46:38.069814   22211 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:46:38.069829   22211 main.go:141] libmachine: STDERR: 
	I0729 04:46:38.069850   22211 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2
	I0729 04:46:38.069854   22211 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:46:38.069865   22211 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:46:38.069897   22211 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:3d:ac:5e:ee:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2
	I0729 04:46:38.071532   22211 main.go:141] libmachine: STDOUT: 
	I0729 04:46:38.071547   22211 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:46:38.071564   22211 client.go:171] duration metric: took 296.356292ms to LocalClient.Create
	I0729 04:46:40.073807   22211 start.go:128] duration metric: took 2.324273416s to createHost
	I0729 04:46:40.073929   22211 start.go:83] releasing machines lock for "ha-851000", held for 2.324463917s
	W0729 04:46:40.073969   22211 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:46:40.086939   22211 out.go:177] * Deleting "ha-851000" in qemu2 ...
	W0729 04:46:40.113456   22211 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:46:40.113480   22211 start.go:729] Will try again in 5 seconds ...
	I0729 04:46:45.115640   22211 start.go:360] acquireMachinesLock for ha-851000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:46:45.116087   22211 start.go:364] duration metric: took 354.084µs to acquireMachinesLock for "ha-851000"
	I0729 04:46:45.116205   22211 start.go:93] Provisioning new machine with config: &{Name:ha-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-851000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:46:45.116522   22211 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:46:45.126153   22211 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:46:45.174692   22211 start.go:159] libmachine.API.Create for "ha-851000" (driver="qemu2")
	I0729 04:46:45.174739   22211 client.go:168] LocalClient.Create starting
	I0729 04:46:45.174839   22211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 04:46:45.174904   22211 main.go:141] libmachine: Decoding PEM data...
	I0729 04:46:45.174923   22211 main.go:141] libmachine: Parsing certificate...
	I0729 04:46:45.174980   22211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 04:46:45.175023   22211 main.go:141] libmachine: Decoding PEM data...
	I0729 04:46:45.175042   22211 main.go:141] libmachine: Parsing certificate...
	I0729 04:46:45.175616   22211 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:46:45.343838   22211 main.go:141] libmachine: Creating SSH key...
	I0729 04:46:45.436981   22211 main.go:141] libmachine: Creating Disk image...
	I0729 04:46:45.436989   22211 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:46:45.437177   22211 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2
	I0729 04:46:45.446634   22211 main.go:141] libmachine: STDOUT: 
	I0729 04:46:45.446666   22211 main.go:141] libmachine: STDERR: 
	I0729 04:46:45.446711   22211 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2 +20000M
	I0729 04:46:45.454580   22211 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:46:45.454595   22211 main.go:141] libmachine: STDERR: 
	I0729 04:46:45.454606   22211 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2
	I0729 04:46:45.454612   22211 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:46:45.454627   22211 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:46:45.454660   22211 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4d:00:76:33:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2
	I0729 04:46:45.456277   22211 main.go:141] libmachine: STDOUT: 
	I0729 04:46:45.456293   22211 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:46:45.456305   22211 client.go:171] duration metric: took 281.568ms to LocalClient.Create
	I0729 04:46:47.458434   22211 start.go:128] duration metric: took 2.341937042s to createHost
	I0729 04:46:47.458544   22211 start.go:83] releasing machines lock for "ha-851000", held for 2.342475084s
	W0729 04:46:47.458990   22211 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-851000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-851000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:46:47.469591   22211 out.go:177] 
	W0729 04:46:47.475676   22211 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:46:47.475699   22211 out.go:239] * 
	* 
	W0729 04:46:47.478240   22211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:46:47.488507   22211 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-851000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (67.857792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.94s)
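Both VM creation attempts above die on the same error: the qemu2 driver launches the guest through /opt/socket_vmnet/bin/socket_vmnet_client, which needs the socket_vmnet daemon listening on /var/run/socket_vmnet, and that socket refuses connections. A minimal Go probe for the same condition, assuming only the socket path printed in the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the unix socket the qemu2 driver hands to socket_vmnet_client.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // "connection refused" here matches the driver failure above and
            // usually means the socket_vmnet daemon is not running on the host.
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }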

TestMultiControlPlane/serial/DeployApp (76.7s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.409625ms)

** stderr ** 
	error: cluster "ha-851000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- rollout status deployment/busybox: exit status 1 (55.834209ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.944958ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.390333ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.366167ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.505375ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.884542ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.159875ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.00125ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.317625ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (78.228ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.691708ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.482958ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.033792ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.613208ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.002792ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (30.173583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (76.70s)
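Note the shape of the failure above: ha_test.go:140 polls for pod IPs repeatedly and only gives up at ha_test.go:159 once its retries are exhausted, which is why a cluster that never started still burns 76.7s here. A minimal sketch of that poll-until-deadline pattern in plain Go (the interval, deadline, and helper name are illustrative, not the test's actual values):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pollUntil retries fn at the given interval until it succeeds
    // or the deadline passes, returning the last error on timeout.
    func pollUntil(interval, timeout time.Duration, fn func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("giving up: %w", err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := pollUntil(5*time.Second, 70*time.Second, func() error {
            // Stand-in for the kubectl call that keeps failing above.
            return errors.New(`no server found for cluster "ha-851000"`)
        })
        fmt.Println(err)
    }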

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-851000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.265625ms)

** stderr ** 
	error: no server found for cluster "ha-851000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (29.634625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)
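Each post-mortem in this report runs `minikube status --format={{.Host}}`; the format string is a Go text/template rendered against the status value, which is why the command prints just "Stopped". A minimal sketch of that rendering, with the struct fields assumed from the status output quoted earlier in this report:

    package main

    import (
        "os"
        "text/template"
    )

    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        // "{{.Host}}" selects a single field, exactly like the --format flag above.
        tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
        tmpl.Execute(os.Stdout, Status{Name: "ha-851000", Host: "Stopped"}) // prints: Stopped
    }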

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-851000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-851000 -v=7 --alsologtostderr: exit status 83 (41.864459ms)

-- stdout --
	* The control-plane node ha-851000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-851000"

-- /stdout --
** stderr ** 
	I0729 04:48:04.383258   22298 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:04.383843   22298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:04.383847   22298 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:04.383850   22298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:04.384053   22298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:04.384262   22298 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:04.384454   22298 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:04.389011   22298 out.go:177] * The control-plane node ha-851000 host is not running: state=Stopped
	I0729 04:48:04.392909   22298 out.go:177]   To start a cluster, run: "minikube start -p ha-851000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-851000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (30.507666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-851000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-851000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.408917ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-851000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-851000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-851000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (30.440666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
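The second error above ("unexpected end of JSON input") is what encoding/json returns when asked to decode an empty document: kubectl wrote only to stderr, so the label decode at ha_test.go:264 had zero bytes to parse. A minimal reproduction:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var labels map[string]string
        // Decoding empty input fails before any type checking happens.
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }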

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-851000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-851000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-851000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-851000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-851000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-851000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-851000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-851000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (30.172875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
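The assertion at ha_test.go:304 decodes the `profile list --output json` blob quoted above and counts Config.Nodes; since the VM was never created, the profile still holds its single placeholder node, hence "4 nodes but have 1". A minimal sketch of that decode-and-count step, with the struct names invented here for illustration and the JSON trimmed to the relevant fields:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []json.RawMessage
            }
        } `json:"valid"`
    }

    func main() {
        // Trimmed-down stand-in for the JSON quoted in the failure above.
        out := []byte(`{"invalid":[],"valid":[{"Name":"ha-851000","Config":{"Nodes":[{}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // ha-851000: 1 node(s); the test wants 4
        }
    }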

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 status --output json -v=7 --alsologtostderr: exit status 7 (30.246208ms)

-- stdout --
	{"Name":"ha-851000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0729 04:48:04.591660   22310 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:04.591807   22310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:04.591810   22310 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:04.591812   22310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:04.591954   22310 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:04.592079   22310 out.go:298] Setting JSON to true
	I0729 04:48:04.592089   22310 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:04.592149   22310 notify.go:220] Checking for updates...
	I0729 04:48:04.592302   22310 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:04.592309   22310 status.go:255] checking status of ha-851000 ...
	I0729 04:48:04.592529   22310 status.go:330] ha-851000 host status = "Stopped" (err=<nil>)
	I0729 04:48:04.592533   22310 status.go:343] host is not running, skipping remaining checks
	I0729 04:48:04.592535   22310 status.go:257] ha-851000 status: &{Name:ha-851000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-851000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (30.270084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
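The decode failure above is a shape mismatch, not bad JSON: with only one node, `status --output json` printed a single object, while the test unmarshals into a slice ([]cmd.Status). A minimal reproduction, using a stand-in struct with fields taken from the stdout above:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Stand-in for cmd.Status; fields assumed from the JSON printed above.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        out := []byte(`{"Name":"ha-851000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

        var many []Status
        fmt.Println(json.Unmarshal(out, &many)) // json: cannot unmarshal object into Go value of type []main.Status

        var one Status
        fmt.Println(json.Unmarshal(out, &one)) // <nil>; a single struct decodes fine
    }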

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 node stop m02 -v=7 --alsologtostderr: exit status 85 (47.090458ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0729 04:48:04.652397   22314 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:04.653042   22314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:04.653046   22314 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:04.653049   22314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:04.653198   22314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:04.653442   22314 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:04.653643   22314 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:04.656619   22314 out.go:177] 
	W0729 04:48:04.660661   22314 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0729 04:48:04.660666   22314 out.go:239] * 
	* 
	W0729 04:48:04.663446   22314 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:48:04.666759   22314 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-851000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr: exit status 7 (29.325ms)

                                                
                                                
-- stdout --
	ha-851000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:48:04.699113   22316 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:04.699315   22316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:04.699318   22316 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:04.699324   22316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:04.699451   22316 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:04.699560   22316 out.go:298] Setting JSON to false
	I0729 04:48:04.699569   22316 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:04.699634   22316 notify.go:220] Checking for updates...
	I0729 04:48:04.699791   22316 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:04.699798   22316 status.go:255] checking status of ha-851000 ...
	I0729 04:48:04.700027   22316 status.go:330] ha-851000 host status = "Stopped" (err=<nil>)
	I0729 04:48:04.700031   22316 status.go:343] host is not running, skipping remaining checks
	I0729 04:48:04.700033   22316 status.go:257] ha-851000 status: &{Name:ha-851000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr": ha-851000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr": ha-851000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr": ha-851000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr": ha-851000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (30.191084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
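Note: the stop itself fails with GUEST_NODE_RETRIEVE because the secondary node m02 was never created (the earlier StartCluster failure left only the primary entry in the profile). Listing nodes first makes that precondition visible; a hedged sketch using the same binary:

    # Show which nodes actually exist in the profile before acting on one.
    out/minikube-darwin-arm64 node list -p ha-851000
    # Only attempt the stop if m02 appears in the listing above.
    out/minikube-darwin-arm64 -p ha-851000 node stop m02 -v=7 --alsologtostderr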

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-851000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-851000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-851000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-851000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (29.490875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)
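Note: the assertion at ha_test.go:413 compares a single field ("Status") buried in a large JSON blob. When reading failures like this one, extracting just the asserted fields is easier; a sketch, assuming jq is available on the host:

    # Print name and status per profile instead of the full config dump.
    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | "\(.Name): \(.Status)"'
    # The test expects "Degraded" here; the run above reports "Stopped".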

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (44.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 node start m02 -v=7 --alsologtostderr: exit status 85 (48.122041ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:48:04.835675   22325 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:04.836224   22325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:04.836227   22325 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:04.836230   22325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:04.836374   22325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:04.836567   22325 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:04.836752   22325 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:04.841712   22325 out.go:177] 
	W0729 04:48:04.844676   22325 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0729 04:48:04.844680   22325 out.go:239] * 
	* 
	W0729 04:48:04.847116   22325 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:48:04.850671   22325 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0729 04:48:04.835675   22325 out.go:291] Setting OutFile to fd 1 ...
I0729 04:48:04.836224   22325 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:48:04.836227   22325 out.go:304] Setting ErrFile to fd 2...
I0729 04:48:04.836230   22325 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:48:04.836374   22325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
I0729 04:48:04.836567   22325 mustload.go:65] Loading cluster: ha-851000
I0729 04:48:04.836752   22325 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:48:04.841712   22325 out.go:177] 
W0729 04:48:04.844676   22325 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0729 04:48:04.844680   22325 out.go:239] * 
* 
W0729 04:48:04.847116   22325 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 04:48:04.850671   22325 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-851000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr: exit status 7 (29.456ms)

                                                
                                                
-- stdout --
	ha-851000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:48:04.883357   22327 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:04.883500   22327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:04.883503   22327 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:04.883506   22327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:04.883623   22327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:04.883739   22327 out.go:298] Setting JSON to false
	I0729 04:48:04.883749   22327 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:04.883807   22327 notify.go:220] Checking for updates...
	I0729 04:48:04.883945   22327 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:04.883951   22327 status.go:255] checking status of ha-851000 ...
	I0729 04:48:04.884155   22327 status.go:330] ha-851000 host status = "Stopped" (err=<nil>)
	I0729 04:48:04.884159   22327 status.go:343] host is not running, skipping remaining checks
	I0729 04:48:04.884161   22327 status.go:257] ha-851000 status: &{Name:ha-851000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr: exit status 7 (72.215125ms)

                                                
                                                
-- stdout --
	ha-851000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:48:05.990392   22329 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:05.990653   22329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:05.990658   22329 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:05.990662   22329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:05.990851   22329 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:05.991024   22329 out.go:298] Setting JSON to false
	I0729 04:48:05.991039   22329 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:05.991084   22329 notify.go:220] Checking for updates...
	I0729 04:48:05.991339   22329 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:05.991349   22329 status.go:255] checking status of ha-851000 ...
	I0729 04:48:05.991676   22329 status.go:330] ha-851000 host status = "Stopped" (err=<nil>)
	I0729 04:48:05.991682   22329 status.go:343] host is not running, skipping remaining checks
	I0729 04:48:05.991685   22329 status.go:257] ha-851000 status: &{Name:ha-851000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr: exit status 7 (73.374875ms)

                                                
                                                
-- stdout --
	ha-851000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:48:07.301210   22331 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:07.301442   22331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:07.301447   22331 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:07.301450   22331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:07.301649   22331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:07.301827   22331 out.go:298] Setting JSON to false
	I0729 04:48:07.301849   22331 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:07.301888   22331 notify.go:220] Checking for updates...
	I0729 04:48:07.302112   22331 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:07.302121   22331 status.go:255] checking status of ha-851000 ...
	I0729 04:48:07.302428   22331 status.go:330] ha-851000 host status = "Stopped" (err=<nil>)
	I0729 04:48:07.302433   22331 status.go:343] host is not running, skipping remaining checks
	I0729 04:48:07.302436   22331 status.go:257] ha-851000 status: &{Name:ha-851000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr: exit status 7 (75.41825ms)

                                                
                                                
-- stdout --
	ha-851000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:48:09.859761   22333 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:09.859958   22333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:09.859962   22333 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:09.859965   22333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:09.860129   22333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:09.860282   22333 out.go:298] Setting JSON to false
	I0729 04:48:09.860294   22333 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:09.860337   22333 notify.go:220] Checking for updates...
	I0729 04:48:09.860559   22333 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:09.860568   22333 status.go:255] checking status of ha-851000 ...
	I0729 04:48:09.860866   22333 status.go:330] ha-851000 host status = "Stopped" (err=<nil>)
	I0729 04:48:09.860871   22333 status.go:343] host is not running, skipping remaining checks
	I0729 04:48:09.860874   22333 status.go:257] ha-851000 status: &{Name:ha-851000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr: exit status 7 (74.9495ms)

                                                
                                                
-- stdout --
	ha-851000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:48:14.336782   22335 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:14.336966   22335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:14.336970   22335 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:14.336973   22335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:14.337135   22335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:14.337293   22335 out.go:298] Setting JSON to false
	I0729 04:48:14.337305   22335 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:14.337347   22335 notify.go:220] Checking for updates...
	I0729 04:48:14.337560   22335 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:14.337569   22335 status.go:255] checking status of ha-851000 ...
	I0729 04:48:14.337865   22335 status.go:330] ha-851000 host status = "Stopped" (err=<nil>)
	I0729 04:48:14.337870   22335 status.go:343] host is not running, skipping remaining checks
	I0729 04:48:14.337873   22335 status.go:257] ha-851000 status: &{Name:ha-851000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr: exit status 7 (73.840292ms)

                                                
                                                
-- stdout --
	ha-851000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:48:18.966446   22340 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:18.966669   22340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:18.966673   22340 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:18.966676   22340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:18.966892   22340 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:18.967066   22340 out.go:298] Setting JSON to false
	I0729 04:48:18.967079   22340 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:18.967121   22340 notify.go:220] Checking for updates...
	I0729 04:48:18.967339   22340 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:18.967347   22340 status.go:255] checking status of ha-851000 ...
	I0729 04:48:18.967683   22340 status.go:330] ha-851000 host status = "Stopped" (err=<nil>)
	I0729 04:48:18.967688   22340 status.go:343] host is not running, skipping remaining checks
	I0729 04:48:18.967691   22340 status.go:257] ha-851000 status: &{Name:ha-851000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr: exit status 7 (74.417542ms)

                                                
                                                
-- stdout --
	ha-851000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:48:26.423940   22344 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:26.424456   22344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:26.424462   22344 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:26.424465   22344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:26.424768   22344 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:26.424985   22344 out.go:298] Setting JSON to false
	I0729 04:48:26.425042   22344 notify.go:220] Checking for updates...
	I0729 04:48:26.425088   22344 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:26.425598   22344 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:26.425609   22344 status.go:255] checking status of ha-851000 ...
	I0729 04:48:26.425883   22344 status.go:330] ha-851000 host status = "Stopped" (err=<nil>)
	I0729 04:48:26.425889   22344 status.go:343] host is not running, skipping remaining checks
	I0729 04:48:26.425892   22344 status.go:257] ha-851000 status: &{Name:ha-851000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr: exit status 7 (73.614625ms)

                                                
                                                
-- stdout --
	ha-851000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:48:39.339609   22346 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:39.339790   22346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:39.339794   22346 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:39.339796   22346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:39.339987   22346 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:39.340145   22346 out.go:298] Setting JSON to false
	I0729 04:48:39.340157   22346 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:39.340192   22346 notify.go:220] Checking for updates...
	I0729 04:48:39.340434   22346 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:39.340443   22346 status.go:255] checking status of ha-851000 ...
	I0729 04:48:39.340711   22346 status.go:330] ha-851000 host status = "Stopped" (err=<nil>)
	I0729 04:48:39.340716   22346 status.go:343] host is not running, skipping remaining checks
	I0729 04:48:39.340719   22346 status.go:257] ha-851000 status: &{Name:ha-851000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr: exit status 7 (76.694542ms)

                                                
                                                
-- stdout --
	ha-851000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:48:48.902487   22348 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:48.902681   22348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:48.902686   22348 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:48.902689   22348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:48.902846   22348 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:48.903018   22348 out.go:298] Setting JSON to false
	I0729 04:48:48.903031   22348 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:48.903070   22348 notify.go:220] Checking for updates...
	I0729 04:48:48.903300   22348 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:48.903308   22348 status.go:255] checking status of ha-851000 ...
	I0729 04:48:48.903607   22348 status.go:330] ha-851000 host status = "Stopped" (err=<nil>)
	I0729 04:48:48.903612   22348 status.go:343] host is not running, skipping remaining checks
	I0729 04:48:48.903615   22348 status.go:257] ha-851000 status: &{Name:ha-851000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (33.7025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (44.13s)
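Note: ha_test.go:428 polls status with growing gaps (timestamps 04:48:04 through 04:48:48 above) before giving up after roughly 44 seconds. A rough shell equivalent of that retry loop, as a sketch with illustrative delays (the real backoff schedule is internal to the test):

    # Poll until status succeeds or the retry budget is spent.
    for delay in 1 1 2 3 5 8 13 9; do
      if out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr; then
        break
      fi
      sleep "$delay"
    done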

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-851000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-851000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-851000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-851000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-851000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-851000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-851000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-851000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (30.066375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)
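Note: here the same `profile list` JSON is asserted on twice: node count (expected 4, got 1) and status (expected "HAppy", got "Stopped"). Both values can be pulled out directly; a sketch, again assuming jq:

    # Count the nodes recorded in the profile config and show the status.
    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[0] | {Status, nodes: (.Config.Nodes | length)}'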

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-851000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-851000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-851000 -v=7 --alsologtostderr: (3.074965958s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-851000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-851000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.218905792s)

                                                
                                                
-- stdout --
	* [ha-851000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-851000" primary control-plane node in "ha-851000" cluster
	* Restarting existing qemu2 VM for "ha-851000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-851000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:48:52.186708   22377 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:52.186909   22377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:52.186913   22377 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:52.186917   22377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:52.187105   22377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:52.188592   22377 out.go:298] Setting JSON to false
	I0729 04:48:52.208012   22377 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10101,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:48:52.208083   22377 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:48:52.211783   22377 out.go:177] * [ha-851000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:48:52.219530   22377 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:48:52.219578   22377 notify.go:220] Checking for updates...
	I0729 04:48:52.225517   22377 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:48:52.228552   22377 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:48:52.230002   22377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:48:52.233555   22377 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:48:52.236591   22377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:48:52.239889   22377 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:52.239945   22377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:48:52.244445   22377 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:48:52.251566   22377 start.go:297] selected driver: qemu2
	I0729 04:48:52.251574   22377 start.go:901] validating driver "qemu2" against &{Name:ha-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-851000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:48:52.251646   22377 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:48:52.253915   22377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:48:52.253954   22377 cni.go:84] Creating CNI manager for ""
	I0729 04:48:52.253962   22377 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 04:48:52.254007   22377 start.go:340] cluster config:
	{Name:ha-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-851000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:48:52.257657   22377 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:48:52.266540   22377 out.go:177] * Starting "ha-851000" primary control-plane node in "ha-851000" cluster
	I0729 04:48:52.270524   22377 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:48:52.270542   22377 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:48:52.270556   22377 cache.go:56] Caching tarball of preloaded images
	I0729 04:48:52.270624   22377 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:48:52.270630   22377 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:48:52.270680   22377 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/ha-851000/config.json ...
	I0729 04:48:52.271144   22377 start.go:360] acquireMachinesLock for ha-851000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:48:52.271179   22377 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "ha-851000"
	I0729 04:48:52.271189   22377 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:48:52.271196   22377 fix.go:54] fixHost starting: 
	I0729 04:48:52.271316   22377 fix.go:112] recreateIfNeeded on ha-851000: state=Stopped err=<nil>
	W0729 04:48:52.271325   22377 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:48:52.279444   22377 out.go:177] * Restarting existing qemu2 VM for "ha-851000" ...
	I0729 04:48:52.283519   22377 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:48:52.283555   22377 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4d:00:76:33:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2
	I0729 04:48:52.285778   22377 main.go:141] libmachine: STDOUT: 
	I0729 04:48:52.285797   22377 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:48:52.285826   22377 fix.go:56] duration metric: took 14.630541ms for fixHost
	I0729 04:48:52.285830   22377 start.go:83] releasing machines lock for "ha-851000", held for 14.646917ms
	W0729 04:48:52.285836   22377 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:48:52.285881   22377 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:48:52.285886   22377 start.go:729] Will try again in 5 seconds ...
	I0729 04:48:57.287987   22377 start.go:360] acquireMachinesLock for ha-851000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:48:57.288363   22377 start.go:364] duration metric: took 291.459µs to acquireMachinesLock for "ha-851000"
	I0729 04:48:57.288502   22377 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:48:57.288522   22377 fix.go:54] fixHost starting: 
	I0729 04:48:57.289215   22377 fix.go:112] recreateIfNeeded on ha-851000: state=Stopped err=<nil>
	W0729 04:48:57.289240   22377 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:48:57.292564   22377 out.go:177] * Restarting existing qemu2 VM for "ha-851000" ...
	I0729 04:48:57.296593   22377 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:48:57.296823   22377 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4d:00:76:33:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2
	I0729 04:48:57.306313   22377 main.go:141] libmachine: STDOUT: 
	I0729 04:48:57.306381   22377 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:48:57.306452   22377 fix.go:56] duration metric: took 17.93225ms for fixHost
	I0729 04:48:57.306466   22377 start.go:83] releasing machines lock for "ha-851000", held for 18.084ms
	W0729 04:48:57.306645   22377 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-851000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-851000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:48:57.314590   22377 out.go:177] 
	W0729 04:48:57.318651   22377 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:48:57.318671   22377 out.go:239] * 
	* 
	W0729 04:48:57.321223   22377 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:48:57.326633   22377 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-851000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-851000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (32.377792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.42s)
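[Editor's note] Every failure in this run reduces to the same precondition: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet before it can hand QEMU a networked file descriptor (the fd=3 in the -netdev flag above). Nothing is listening on that socket, so the dial is refused before any VM boots. A minimal Go sketch of that probe follows; the socket path is taken from the logs above, and this is an illustration, not minikube's actual code:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Probe the unix socket that socket_vmnet_client connects to.
func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath in the cluster config above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// The state this run is in: "Connection refused", so every
		// start/restart exits with GUEST_PROVISION (exit status 80).
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening")
}

When the daemon is up, the dial succeeds and the qemu2 driver's network attach works; here it fails on both the immediate attempt and the 5-second retry, which is why each subtest below sees the host stuck in state=Stopped.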

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.939417ms)

-- stdout --
	* The control-plane node ha-851000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-851000"

-- /stdout --
** stderr ** 
	I0729 04:48:57.467142   22391 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:57.467712   22391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:57.467716   22391 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:57.467718   22391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:57.467870   22391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:57.468099   22391 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:57.468293   22391 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:57.472640   22391 out.go:177] * The control-plane node ha-851000 host is not running: state=Stopped
	I0729 04:48:57.476703   22391 out.go:177]   To start a cluster, run: "minikube start -p ha-851000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-851000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr: exit status 7 (29.251833ms)

-- stdout --
	ha-851000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:48:57.508116   22393 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:48:57.508255   22393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:57.508259   22393 out.go:304] Setting ErrFile to fd 2...
	I0729 04:48:57.508261   22393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:48:57.508383   22393 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:48:57.508499   22393 out.go:298] Setting JSON to false
	I0729 04:48:57.508508   22393 mustload.go:65] Loading cluster: ha-851000
	I0729 04:48:57.508575   22393 notify.go:220] Checking for updates...
	I0729 04:48:57.508697   22393 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:48:57.508703   22393 status.go:255] checking status of ha-851000 ...
	I0729 04:48:57.508908   22393 status.go:330] ha-851000 host status = "Stopped" (err=<nil>)
	I0729 04:48:57.508911   22393 status.go:343] host is not running, skipping remaining checks
	I0729 04:48:57.508913   22393 status.go:257] ha-851000 status: &{Name:ha-851000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (29.362584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-851000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-851000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-851000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-851000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (29.0305ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
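[Editor's note] The "Degraded" assertions in this run all parse the output of "out/minikube-darwin-arm64 profile list --output json" and compare the profile's Status field ("Stopped" here) against the expected value. A sketch of that decoding step, using only the top-level field names visible in the JSON blob logged above (not the test's actual types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Matches the shape of the "profile list --output json" output above:
// {"invalid":[...],"valid":[{"Name":"ha-851000","Status":"Stopped",...}]}
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expects "Degraded"; this run reports "Stopped".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}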

TestMultiControlPlane/serial/StopCluster (2.97s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-851000 stop -v=7 --alsologtostderr: (2.867556084s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr: exit status 7 (66.893041ms)

-- stdout --
	ha-851000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:49:00.548237   22420 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:49:00.548440   22420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:49:00.548445   22420 out.go:304] Setting ErrFile to fd 2...
	I0729 04:49:00.548448   22420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:49:00.548610   22420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:49:00.548748   22420 out.go:298] Setting JSON to false
	I0729 04:49:00.548761   22420 mustload.go:65] Loading cluster: ha-851000
	I0729 04:49:00.548800   22420 notify.go:220] Checking for updates...
	I0729 04:49:00.549041   22420 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:49:00.549049   22420 status.go:255] checking status of ha-851000 ...
	I0729 04:49:00.549333   22420 status.go:330] ha-851000 host status = "Stopped" (err=<nil>)
	I0729 04:49:00.549337   22420 status.go:343] host is not running, skipping remaining checks
	I0729 04:49:00.549341   22420 status.go:257] ha-851000 status: &{Name:ha-851000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr": ha-851000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr": ha-851000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-851000 status -v=7 --alsologtostderr": ha-851000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (32.771458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.97s)
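[Editor's note] The three StopCluster assertions above count lines in the plain "status" output: how many nodes report "type: Control Plane", and how many kubelets and apiservers report "Stopped". With only a single node left in the profile, every count comes up short. A sketch of that counting, assuming the line format shown in the stdout block above:

package main

import (
	"fmt"
	"strings"
)

// One status block per node, as captured in the stdout above.
const status = `ha-851000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`

func main() {
	controlPlanes := strings.Count(status, "type: Control Plane")
	stoppedKubelets := strings.Count(status, "kubelet: Stopped")
	stoppedAPIServers := strings.Count(status, "apiserver: Stopped")
	// Per the assertion messages, the HA test wants 2 control planes,
	// 3 stopped kubelets, and 2 stopped apiservers; this run has one node.
	fmt.Println(controlPlanes, stoppedKubelets, stoppedAPIServers) // 1 1 1
}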

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-851000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-851000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.184270084s)

-- stdout --
	* [ha-851000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-851000" primary control-plane node in "ha-851000" cluster
	* Restarting existing qemu2 VM for "ha-851000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-851000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:49:00.611019   22424 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:49:00.611145   22424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:49:00.611148   22424 out.go:304] Setting ErrFile to fd 2...
	I0729 04:49:00.611150   22424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:49:00.611289   22424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:49:00.612281   22424 out.go:298] Setting JSON to false
	I0729 04:49:00.628657   22424 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10109,"bootTime":1722243631,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:49:00.628720   22424 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:49:00.633378   22424 out.go:177] * [ha-851000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:49:00.641309   22424 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:49:00.641377   22424 notify.go:220] Checking for updates...
	I0729 04:49:00.649337   22424 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:49:00.653293   22424 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:49:00.656300   22424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:49:00.659358   22424 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:49:00.662196   22424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:49:00.665515   22424 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:49:00.665785   22424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:49:00.669318   22424 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:49:00.676301   22424 start.go:297] selected driver: qemu2
	I0729 04:49:00.676310   22424 start.go:901] validating driver "qemu2" against &{Name:ha-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-851000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:49:00.676379   22424 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:49:00.678774   22424 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:49:00.678799   22424 cni.go:84] Creating CNI manager for ""
	I0729 04:49:00.678805   22424 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 04:49:00.678842   22424 start.go:340] cluster config:
	{Name:ha-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-851000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:49:00.682568   22424 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:49:00.689250   22424 out.go:177] * Starting "ha-851000" primary control-plane node in "ha-851000" cluster
	I0729 04:49:00.693289   22424 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:49:00.693306   22424 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:49:00.693319   22424 cache.go:56] Caching tarball of preloaded images
	I0729 04:49:00.693371   22424 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:49:00.693377   22424 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:49:00.693437   22424 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/ha-851000/config.json ...
	I0729 04:49:00.693896   22424 start.go:360] acquireMachinesLock for ha-851000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:49:00.693930   22424 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "ha-851000"
	I0729 04:49:00.693940   22424 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:49:00.693945   22424 fix.go:54] fixHost starting: 
	I0729 04:49:00.694065   22424 fix.go:112] recreateIfNeeded on ha-851000: state=Stopped err=<nil>
	W0729 04:49:00.694073   22424 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:49:00.705789   22424 out.go:177] * Restarting existing qemu2 VM for "ha-851000" ...
	I0729 04:49:00.709278   22424 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:49:00.709328   22424 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4d:00:76:33:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2
	I0729 04:49:00.711409   22424 main.go:141] libmachine: STDOUT: 
	I0729 04:49:00.711429   22424 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:49:00.711459   22424 fix.go:56] duration metric: took 17.513667ms for fixHost
	I0729 04:49:00.711463   22424 start.go:83] releasing machines lock for "ha-851000", held for 17.529ms
	W0729 04:49:00.711470   22424 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:49:00.711508   22424 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:49:00.711513   22424 start.go:729] Will try again in 5 seconds ...
	I0729 04:49:05.713595   22424 start.go:360] acquireMachinesLock for ha-851000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:49:05.714026   22424 start.go:364] duration metric: took 306.625µs to acquireMachinesLock for "ha-851000"
	I0729 04:49:05.714155   22424 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:49:05.714175   22424 fix.go:54] fixHost starting: 
	I0729 04:49:05.714829   22424 fix.go:112] recreateIfNeeded on ha-851000: state=Stopped err=<nil>
	W0729 04:49:05.714853   22424 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:49:05.719321   22424 out.go:177] * Restarting existing qemu2 VM for "ha-851000" ...
	I0729 04:49:05.723260   22424 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:49:05.723475   22424 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4d:00:76:33:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/ha-851000/disk.qcow2
	I0729 04:49:05.732112   22424 main.go:141] libmachine: STDOUT: 
	I0729 04:49:05.732170   22424 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:49:05.732235   22424 fix.go:56] duration metric: took 18.057416ms for fixHost
	I0729 04:49:05.732253   22424 start.go:83] releasing machines lock for "ha-851000", held for 18.20125ms
	W0729 04:49:05.732430   22424 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-851000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-851000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:49:05.739236   22424 out.go:177] 
	W0729 04:49:05.743277   22424 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:49:05.743314   22424 out.go:239] * 
	* 
	W0729 04:49:05.746040   22424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:49:05.754250   22424 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-851000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (67.39775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-851000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-851000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-851000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-851000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (30.142167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-851000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-851000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.551917ms)

-- stdout --
	* The control-plane node ha-851000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-851000"

-- /stdout --
** stderr ** 
	I0729 04:49:05.944547   22439 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:49:05.944698   22439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:49:05.944701   22439 out.go:304] Setting ErrFile to fd 2...
	I0729 04:49:05.944704   22439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:49:05.944834   22439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:49:05.945064   22439 mustload.go:65] Loading cluster: ha-851000
	I0729 04:49:05.945253   22439 config.go:182] Loaded profile config "ha-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:49:05.949124   22439 out.go:177] * The control-plane node ha-851000 host is not running: state=Stopped
	I0729 04:49:05.953284   22439 out.go:177]   To start a cluster, run: "minikube start -p ha-851000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-851000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (29.15975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-851000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-851000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-851000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-851000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-851000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-851000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-851000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-851000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-851000 -n ha-851000: exit status 7 (30ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

TestImageBuild/serial/Setup (10.01s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-066000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-066000 --driver=qemu2 : exit status 80 (9.942708958s)

-- stdout --
	* [image-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-066000" primary control-plane node in "image-066000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-066000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-066000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-066000 -n image-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-066000 -n image-066000: exit status 7 (67.296291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.01s)
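
Every qemu2 start in this report dies at the same point: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand qemu a network descriptor and minikube exits with status 80. A minimal sketch, assuming only the socket path from the logs above (this is a hypothetical standalone probe, not part of the minikube test suite):

    // probe.go: dial the unix socket that every qemu2 start here fails to reach.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// "Connection refused" matches the failure mode above: the socket
    		// file may exist, but no socket_vmnet daemon is accepting on it.
    		fmt.Println("socket_vmnet unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }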

TestJSONOutput/start/Command (9.77s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-594000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-594000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.767649625s)

-- stdout --
	{"specversion":"1.0","id":"707fcffa-ae73-4f3d-9a98-403e3e45a574","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-594000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4fd4948a-5fc2-46e3-8ebd-f6196eaf51ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19338"}}
	{"specversion":"1.0","id":"f1d0ba69-24ad-4f04-ae6e-ecc8e3790ef0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig"}}
	{"specversion":"1.0","id":"1b210ff6-0215-45e2-a7f8-375c01b59852","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a0b714c0-b342-4ba0-b313-6cf4ca945d7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"91b9ba67-3883-4684-b6b3-c2dd4e0b4451","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube"}}
	{"specversion":"1.0","id":"74c0c566-08f9-4c67-bfab-aad308c7c5da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0a1326e4-a8ec-40c2-a789-184158d885a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"27f96b9a-03e4-4176-ac71-7f9760fc9b05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"8a1a48ae-ac0b-4963-8dfe-376b7959555a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-594000\" primary control-plane node in \"json-output-594000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cae2639b-abb8-4839-afa4-4fedaee4a988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"f1d5f260-975b-4c69-a6ab-12a1ca408a6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-594000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"a52d87d4-1a84-4787-b811-e559fd5f3a61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"60a06a93-d900-4a1f-8508-c845ea37d391","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"8b292faf-8a5a-41f3-bc10-1f319319a487","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-594000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"acd391fc-77a5-4aab-9fa5-32b25e139917","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"69d8145b-26ad-42ab-b352-1cdcff256c86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-594000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.77s)
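
The secondary failure here is a parsing artifact of the primary one: the test decodes each stdout line as a CloudEvent, but the raw `OUTPUT: ` line injected by the failing socket_vmnet_client is not JSON, so decoding stops at its first byte. A small sketch (hypothetical, not the test's actual code) reproducing the `invalid character 'O'` error; the `invalid character '*'` in the unpause failure below arises the same way from a plain-text `* ...` line:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// One well-formed event line, and the non-JSON line that broke parsing.
    	lines := []string{
    		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
    		`OUTPUT: `,
    	}
    	for _, l := range lines {
    		var ev map[string]interface{}
    		if err := json.Unmarshal([]byte(l), &ev); err != nil {
    			// prints: invalid character 'O' looking for beginning of value
    			fmt.Println("converting to cloud events:", err)
    		}
    	}
    }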

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-594000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-594000 --output=json --user=testUser: exit status 83 (79.704125ms)

-- stdout --
	{"specversion":"1.0","id":"6692af68-74b5-4c25-a5a8-b727f79b3181","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-594000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"f1d565f2-457c-47e3-94ee-824635c3fe7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-594000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-594000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-594000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-594000 --output=json --user=testUser: exit status 83 (46.050459ms)

-- stdout --
	* The control-plane node json-output-594000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-594000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-594000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-594000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.13s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-876000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-876000 --driver=qemu2 : exit status 80 (9.840817s)

-- stdout --
	* [first-876000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-876000" primary control-plane node in "first-876000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-876000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-876000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-876000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 04:49:38.240062 -0700 PDT m=+409.072601251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-878000 -n second-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-878000 -n second-878000: exit status 85 (79.609417ms)

-- stdout --
	* Profile "second-878000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-878000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-878000" host is not running, skipping log retrieval (state="* Profile \"second-878000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-878000\"")
helpers_test.go:175: Cleaning up "second-878000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-878000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 04:49:38.429555 -0700 PDT m=+409.262098542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-876000 -n first-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-876000 -n first-876000: exit status 7 (30.090375ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-876000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-876000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-876000
--- FAIL: TestMinikubeProfile (10.13s)
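
Four distinct exit codes show up in this run's post-mortems (80 and 7 above, 85 for the never-created second profile, 83 in the pause/unpause sections earlier). The legend below is inferred from this report alone, not from minikube's reason-code tables:

    package main

    import "fmt"

    func main() {
    	// Hedged legend, assumed from the surrounding post-mortems only.
    	codes := map[int]string{
    		7:  `"minikube status" on a created-but-stopped host (noted "may be ok")`,
    		80: "GUEST_PROVISION: the qemu2 guest could not be provisioned",
    		83: "pause/unpause refused: control-plane host not running",
    		85: "profile not found (e.g. second-878000 was never created)",
    	}
    	for _, c := range []int{7, 80, 83, 85} {
    		fmt.Printf("exit status %d: %s\n", c, codes[c])
    	}
    }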

TestMountStart/serial/StartWithMountFirst (9.91s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-639000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-639000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.841116083s)

-- stdout --
	* [mount-start-1-639000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-639000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-639000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-639000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-639000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-639000 -n mount-start-1-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-639000 -n mount-start-1-639000: exit status 7 (71.281875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-639000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.91s)
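
Each start attempt in this report follows the same arc, visible in the stdout above and spelled out in the verbose trace of the next failure (start.go:729: "Will try again in 5 seconds"): create the host, fail to dial the socket, delete the half-created profile, retry once after five seconds, then exit 80. A sketch of that control flow, as an assumption about its shape rather than minikube's actual start code:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    func startHost() error {
    	// Stand-in for libmachine's create path, which fails while launching
    	// qemu via socket_vmnet_client in every attempt logged above.
    	return errors.New(`creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
    }

    func main() {
    	if err := startHost(); err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
    		if err = startHost(); err != nil {
    			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    			os.Exit(80)
    		}
    	}
    }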

TestMultiNode/serial/FreshStart2Nodes (10.07s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-623000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-623000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.998075084s)

-- stdout --
	* [multinode-623000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-623000" primary control-plane node in "multinode-623000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-623000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:49:48.654553   22580 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:49:48.654682   22580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:49:48.654690   22580 out.go:304] Setting ErrFile to fd 2...
	I0729 04:49:48.654693   22580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:49:48.654815   22580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:49:48.655846   22580 out.go:298] Setting JSON to false
	I0729 04:49:48.671862   22580 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10157,"bootTime":1722243631,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:49:48.671963   22580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:49:48.677688   22580 out.go:177] * [multinode-623000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:49:48.685642   22580 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:49:48.685679   22580 notify.go:220] Checking for updates...
	I0729 04:49:48.693596   22580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:49:48.696638   22580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:49:48.699611   22580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:49:48.702567   22580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:49:48.705649   22580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:49:48.708735   22580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:49:48.712569   22580 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:49:48.719624   22580 start.go:297] selected driver: qemu2
	I0729 04:49:48.719631   22580 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:49:48.719637   22580 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:49:48.722007   22580 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:49:48.725573   22580 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:49:48.728707   22580 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:49:48.728729   22580 cni.go:84] Creating CNI manager for ""
	I0729 04:49:48.728735   22580 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 04:49:48.728741   22580 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 04:49:48.728796   22580 start.go:340] cluster config:
	{Name:multinode-623000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:49:48.732555   22580 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:49:48.740600   22580 out.go:177] * Starting "multinode-623000" primary control-plane node in "multinode-623000" cluster
	I0729 04:49:48.744645   22580 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:49:48.744662   22580 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:49:48.744670   22580 cache.go:56] Caching tarball of preloaded images
	I0729 04:49:48.744739   22580 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:49:48.744745   22580 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:49:48.745013   22580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/multinode-623000/config.json ...
	I0729 04:49:48.745025   22580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/multinode-623000/config.json: {Name:mk390d064b37d64078288578d1996edb02f7ca4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:49:48.745255   22580 start.go:360] acquireMachinesLock for multinode-623000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:49:48.745291   22580 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "multinode-623000"
	I0729 04:49:48.745305   22580 start.go:93] Provisioning new machine with config: &{Name:multinode-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:49:48.745343   22580 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:49:48.754559   22580 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:49:48.772917   22580 start.go:159] libmachine.API.Create for "multinode-623000" (driver="qemu2")
	I0729 04:49:48.772948   22580 client.go:168] LocalClient.Create starting
	I0729 04:49:48.773024   22580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 04:49:48.773054   22580 main.go:141] libmachine: Decoding PEM data...
	I0729 04:49:48.773063   22580 main.go:141] libmachine: Parsing certificate...
	I0729 04:49:48.773104   22580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 04:49:48.773128   22580 main.go:141] libmachine: Decoding PEM data...
	I0729 04:49:48.773137   22580 main.go:141] libmachine: Parsing certificate...
	I0729 04:49:48.773483   22580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:49:48.927276   22580 main.go:141] libmachine: Creating SSH key...
	I0729 04:49:49.082938   22580 main.go:141] libmachine: Creating Disk image...
	I0729 04:49:49.082944   22580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:49:49.083136   22580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2
	I0729 04:49:49.092557   22580 main.go:141] libmachine: STDOUT: 
	I0729 04:49:49.092573   22580 main.go:141] libmachine: STDERR: 
	I0729 04:49:49.092629   22580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2 +20000M
	I0729 04:49:49.100578   22580 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:49:49.100593   22580 main.go:141] libmachine: STDERR: 
	I0729 04:49:49.100617   22580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2
	I0729 04:49:49.100621   22580 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:49:49.100635   22580 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:49:49.100659   22580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:53:7e:6b:5e:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2
	I0729 04:49:49.102229   22580 main.go:141] libmachine: STDOUT: 
	I0729 04:49:49.102244   22580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:49:49.102264   22580 client.go:171] duration metric: took 329.31975ms to LocalClient.Create
	I0729 04:49:51.104392   22580 start.go:128] duration metric: took 2.359081583s to createHost
	I0729 04:49:51.104492   22580 start.go:83] releasing machines lock for "multinode-623000", held for 2.359215292s
	W0729 04:49:51.104542   22580 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:49:51.113552   22580 out.go:177] * Deleting "multinode-623000" in qemu2 ...
	W0729 04:49:51.143339   22580 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:49:51.143366   22580 start.go:729] Will try again in 5 seconds ...
	I0729 04:49:56.145487   22580 start.go:360] acquireMachinesLock for multinode-623000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:49:56.145917   22580 start.go:364] duration metric: took 328.208µs to acquireMachinesLock for "multinode-623000"
	I0729 04:49:56.146032   22580 start.go:93] Provisioning new machine with config: &{Name:multinode-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:49:56.146355   22580 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:49:56.160188   22580 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:49:56.212232   22580 start.go:159] libmachine.API.Create for "multinode-623000" (driver="qemu2")
	I0729 04:49:56.212280   22580 client.go:168] LocalClient.Create starting
	I0729 04:49:56.212395   22580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 04:49:56.212467   22580 main.go:141] libmachine: Decoding PEM data...
	I0729 04:49:56.212486   22580 main.go:141] libmachine: Parsing certificate...
	I0729 04:49:56.212558   22580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 04:49:56.212607   22580 main.go:141] libmachine: Decoding PEM data...
	I0729 04:49:56.212621   22580 main.go:141] libmachine: Parsing certificate...
	I0729 04:49:56.213215   22580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:49:56.379513   22580 main.go:141] libmachine: Creating SSH key...
	I0729 04:49:56.557937   22580 main.go:141] libmachine: Creating Disk image...
	I0729 04:49:56.557943   22580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:49:56.558139   22580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2
	I0729 04:49:56.567888   22580 main.go:141] libmachine: STDOUT: 
	I0729 04:49:56.567916   22580 main.go:141] libmachine: STDERR: 
	I0729 04:49:56.567976   22580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2 +20000M
	I0729 04:49:56.575892   22580 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:49:56.575916   22580 main.go:141] libmachine: STDERR: 
	I0729 04:49:56.575927   22580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2
	I0729 04:49:56.575930   22580 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:49:56.575943   22580 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:49:56.575978   22580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:0e:ab:76:6b:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2
	I0729 04:49:56.577573   22580 main.go:141] libmachine: STDOUT: 
	I0729 04:49:56.577603   22580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:49:56.577616   22580 client.go:171] duration metric: took 365.339042ms to LocalClient.Create
	I0729 04:49:58.579739   22580 start.go:128] duration metric: took 2.433413584s to createHost
	I0729 04:49:58.579798   22580 start.go:83] releasing machines lock for "multinode-623000", held for 2.433914541s
	W0729 04:49:58.580201   22580 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-623000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-623000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:49:58.592813   22580 out.go:177] 
	W0729 04:49:58.596871   22580 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:49:58.596895   22580 out.go:239] * 
	* 
	W0729 04:49:58.599469   22580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:49:58.609734   22580 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-623000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (66.485958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.07s)
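
The verbose trace above shows the exact launch mechanism: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... -netdev socket,id=net0,fd=3, i.e. the wrapper is expected to connect to the vmnet socket and pass that connection to qemu as inherited descriptor 3. A Go sketch of the fd-passing pattern follows; it illustrates the technique under that assumption and is not socket_vmnet_client's actual implementation:

    package main

    import (
    	"log"
    	"net"
    	"os"
    	"os/exec"
    )

    func main() {
    	// This dial is the step that fails throughout the report.
    	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
    	if err != nil {
    		log.Fatalf(`Failed to connect to "/var/run/socket_vmnet": %v`, err)
    	}
    	f, err := conn.(*net.UnixConn).File()
    	if err != nil {
    		log.Fatal(err)
    	}
    	cmd := exec.Command("qemu-system-aarch64",
    		"-netdev", "socket,id=net0,fd=3")
    	cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    }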

TestMultiNode/serial/DeployApp2Nodes (81.67s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.345ms)

** stderr ** 
	error: cluster "multinode-623000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- rollout status deployment/busybox: exit status 1 (56.062ms)

** stderr ** 
	error: no server found for cluster "multinode-623000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.418625ms)

** stderr ** 
	error: no server found for cluster "multinode-623000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.796666ms)

** stderr ** 
	error: no server found for cluster "multinode-623000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.946375ms)

** stderr ** 
	error: no server found for cluster "multinode-623000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.592833ms)

** stderr ** 
	error: no server found for cluster "multinode-623000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.975791ms)

** stderr ** 
	error: no server found for cluster "multinode-623000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.863167ms)

** stderr ** 
	error: no server found for cluster "multinode-623000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.348208ms)

** stderr ** 
	error: no server found for cluster "multinode-623000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.82225ms)

** stderr ** 
	error: no server found for cluster "multinode-623000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.544333ms)

** stderr ** 
	error: no server found for cluster "multinode-623000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.289ms)

** stderr ** 
	error: no server found for cluster "multinode-623000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.83425ms)

** stderr ** 
	error: no server found for cluster "multinode-623000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.930417ms)

** stderr ** 
	error: no server found for cluster "multinode-623000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.640875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-623000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.406209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-623000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (29.558375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (81.67s)
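
The failing loop above shells out to the profile-scoped kubectl, treats every non-zero exit as possibly transient ("may be temporary"), and only gives up for good at multinode_test.go:524. A minimal Go sketch of that retry shape, assuming illustrative helper names and timings rather than the test's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs runs the same command as the test; without a shell, the jsonpath
// expression needs no extra quoting.
func podIPs(profile string) ([]string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return nil, fmt.Errorf("failed to retrieve Pod IPs (may be temporary): %w", err)
	}
	return strings.Fields(string(out)), nil
}

func main() {
	var ips []string
	var err error
	for attempt := 0; attempt < 5; attempt++ { // fixed retry budget; the test's schedule differs
		if ips, err = podIPs("multinode-623000"); err == nil {
			break
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println(ips, err)
}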

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-623000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.091292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-623000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (30.533917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-623000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-623000 -v 3 --alsologtostderr: exit status 83 (43.391334ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-623000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-623000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:51:20.476188   22664 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:51:20.476343   22664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:20.476346   22664 out.go:304] Setting ErrFile to fd 2...
	I0729 04:51:20.476348   22664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:20.476477   22664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:51:20.476714   22664 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:51:20.476903   22664 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:51:20.482235   22664 out.go:177] * The control-plane node multinode-623000 host is not running: state=Stopped
	I0729 04:51:20.485912   22664 out.go:177]   To start a cluster, run: "minikube start -p multinode-623000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-623000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (30.424125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
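
`node add` refuses to modify a stopped cluster and exits 83 with the hint printed in stdout. A small Go sketch that probes host state the same way the post-mortem does before attempting the add; the control flow is illustrative, and only commands that already appear in this log are used:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin, profile := "out/minikube-darwin-arm64", "multinode-623000"

	// Same probe as the post-mortem: prints "Running" or "Stopped". The
	// command exits 7 for a stopped host, so inspect stdout, not the error.
	out, _ := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile).Output()
	state := strings.TrimSpace(string(out))
	if state != "Running" {
		fmt.Printf("host is %q; run \"%s start -p %s\" first\n", state, bin, profile)
		return
	}
	if err := exec.Command(bin, "node", "add", "-p", profile).Run(); err != nil {
		fmt.Println("node add failed:", err)
	}
}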

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-623000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-623000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.165833ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-623000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-623000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-623000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (30.528917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
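
The two errors at multinode_test.go:223 and :230 are one failure: kubectl exits 1 because the kubeconfig has no multinode-623000 context, so the captured stdout is empty, and decoding empty bytes is exactly what encoding/json reports as "unexpected end of JSON input". A minimal reproduction:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl exited non-zero, so the captured stdout was empty; handing
	// empty bytes to encoding/json yields the second error verbatim.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}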

                                                
                                    
TestMultiNode/serial/ProfileList (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-623000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-623000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-623000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-623000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (29.323833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
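
The assertion at multinode_test.go:166 decodes the `profile list --output json` dump quoted above and counts Config.Nodes, finding one node where three are expected. A hedged Go sketch of that count, using a minimal struct that mirrors only the fields visible in the dump, not minikube's own types:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList covers only the fields this check needs; the real dump above
// carries the full cluster config.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	}
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // test wants 3, log shows 1
	}
}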

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status --output json --alsologtostderr: exit status 7 (29.840042ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-623000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:51:20.680671   22676 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:51:20.680812   22676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:20.680815   22676 out.go:304] Setting ErrFile to fd 2...
	I0729 04:51:20.680818   22676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:20.680958   22676 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:51:20.681062   22676 out.go:298] Setting JSON to true
	I0729 04:51:20.681071   22676 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:51:20.681128   22676 notify.go:220] Checking for updates...
	I0729 04:51:20.681261   22676 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:51:20.681268   22676 status.go:255] checking status of multinode-623000 ...
	I0729 04:51:20.681491   22676 status.go:330] multinode-623000 host status = "Stopped" (err=<nil>)
	I0729 04:51:20.681496   22676 status.go:343] host is not running, skipping remaining checks
	I0729 04:51:20.681498   22676 status.go:257] multinode-623000 status: &{Name:multinode-623000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-623000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (29.920958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
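
The decode error at multinode_test.go:191 is a shape mismatch: this single-node profile prints one JSON object (see the stdout above) while the test unmarshals into []cmd.Status. A sketch of a tolerant decode, with a stand-in Status type built from the fields visible above:

package main

import (
	"encoding/json"
	"fmt"
)

// Status is a stand-in for the test's cmd.Status, limited to the fields
// visible in the stdout above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts either shape: multi-node status prints a JSON array,
// single-node a bare object, which is what trips the slice-only decode here.
func decodeStatuses(out []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(out, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(out, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	single := []byte(`{"Name":"multinode-623000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	statuses, err := decodeStatuses(single)
	fmt.Println(statuses, err)
}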

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 node stop m03: exit status 85 (45.457959ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-623000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status: exit status 7 (30.585791ms)

                                                
                                                
-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status --alsologtostderr: exit status 7 (29.946208ms)

                                                
                                                
-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:51:20.816878   22684 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:51:20.817030   22684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:20.817033   22684 out.go:304] Setting ErrFile to fd 2...
	I0729 04:51:20.817036   22684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:20.817160   22684 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:51:20.817298   22684 out.go:298] Setting JSON to false
	I0729 04:51:20.817308   22684 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:51:20.817343   22684 notify.go:220] Checking for updates...
	I0729 04:51:20.817503   22684 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:51:20.817509   22684 status.go:255] checking status of multinode-623000 ...
	I0729 04:51:20.817714   22684 status.go:330] multinode-623000 host status = "Stopped" (err=<nil>)
	I0729 04:51:20.817718   22684 status.go:343] host is not running, skipping remaining checks
	I0729 04:51:20.817720   22684 status.go:257] multinode-623000 status: &{Name:multinode-623000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-623000 status --alsologtostderr": multinode-623000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (29.852792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
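
`node stop m03` exits 85 (GUEST_NODE_RETRIEVE) because this profile never got past its primary node, so there is no m03 to stop. A small sketch that checks membership via `node list` before stopping, assuming node list prints one line per node with the name first:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin, profile, node := "out/minikube-darwin-arm64", "multinode-623000", "m03"

	// `node list -p <profile>` (run later in this log) prints one line per
	// node in the profile, name first.
	out, err := exec.Command(bin, "node", "list", "-p", profile).Output()
	if err != nil {
		fmt.Println("node list:", err)
		return
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.HasPrefix(line, node) {
			fmt.Println("node stop:", exec.Command(bin, "-p", profile, "node", "stop", node).Run())
			return
		}
	}
	fmt.Printf("node %s not in profile %s; nothing to stop\n", node, profile)
}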

                                                
                                    
TestMultiNode/serial/StartAfterStop (58.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 node start m03 -v=7 --alsologtostderr: exit status 85 (48.005167ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:51:20.877934   22688 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:51:20.878328   22688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:20.878332   22688 out.go:304] Setting ErrFile to fd 2...
	I0729 04:51:20.878334   22688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:20.878511   22688 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:51:20.878734   22688 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:51:20.878922   22688 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:51:20.882457   22688 out.go:177] 
	W0729 04:51:20.886489   22688 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0729 04:51:20.886494   22688 out.go:239] * 
	* 
	W0729 04:51:20.888735   22688 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:51:20.892438   22688 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0729 04:51:20.877934   22688 out.go:291] Setting OutFile to fd 1 ...
I0729 04:51:20.878328   22688 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:51:20.878332   22688 out.go:304] Setting ErrFile to fd 2...
I0729 04:51:20.878334   22688 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:51:20.878511   22688 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
I0729 04:51:20.878734   22688 mustload.go:65] Loading cluster: multinode-623000
I0729 04:51:20.878922   22688 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:51:20.882457   22688 out.go:177] 
W0729 04:51:20.886489   22688 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0729 04:51:20.886494   22688 out.go:239] * 
* 
W0729 04:51:20.888735   22688 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 04:51:20.892438   22688 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-623000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr: exit status 7 (29.0925ms)

                                                
                                                
-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:51:20.924812   22690 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:51:20.924971   22690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:20.924974   22690 out.go:304] Setting ErrFile to fd 2...
	I0729 04:51:20.924984   22690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:20.925099   22690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:51:20.925220   22690 out.go:298] Setting JSON to false
	I0729 04:51:20.925229   22690 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:51:20.925289   22690 notify.go:220] Checking for updates...
	I0729 04:51:20.925425   22690 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:51:20.925432   22690 status.go:255] checking status of multinode-623000 ...
	I0729 04:51:20.925635   22690 status.go:330] multinode-623000 host status = "Stopped" (err=<nil>)
	I0729 04:51:20.925639   22690 status.go:343] host is not running, skipping remaining checks
	I0729 04:51:20.925642   22690 status.go:257] multinode-623000 status: &{Name:multinode-623000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr: exit status 7 (74.0755ms)

                                                
                                                
-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:51:21.706645   22692 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:51:21.706838   22692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:21.706842   22692 out.go:304] Setting ErrFile to fd 2...
	I0729 04:51:21.706845   22692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:21.707033   22692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:51:21.707195   22692 out.go:298] Setting JSON to false
	I0729 04:51:21.707206   22692 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:51:21.707242   22692 notify.go:220] Checking for updates...
	I0729 04:51:21.707437   22692 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:51:21.707446   22692 status.go:255] checking status of multinode-623000 ...
	I0729 04:51:21.707719   22692 status.go:330] multinode-623000 host status = "Stopped" (err=<nil>)
	I0729 04:51:21.707724   22692 status.go:343] host is not running, skipping remaining checks
	I0729 04:51:21.707727   22692 status.go:257] multinode-623000 status: &{Name:multinode-623000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr: exit status 7 (72.997458ms)

                                                
                                                
-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:51:23.034208   22694 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:51:23.034392   22694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:23.034396   22694 out.go:304] Setting ErrFile to fd 2...
	I0729 04:51:23.034399   22694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:23.034579   22694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:51:23.034738   22694 out.go:298] Setting JSON to false
	I0729 04:51:23.034751   22694 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:51:23.034792   22694 notify.go:220] Checking for updates...
	I0729 04:51:23.035017   22694 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:51:23.035028   22694 status.go:255] checking status of multinode-623000 ...
	I0729 04:51:23.035301   22694 status.go:330] multinode-623000 host status = "Stopped" (err=<nil>)
	I0729 04:51:23.035306   22694 status.go:343] host is not running, skipping remaining checks
	I0729 04:51:23.035309   22694 status.go:257] multinode-623000 status: &{Name:multinode-623000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr: exit status 7 (73.610916ms)

                                                
                                                
-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:51:25.725868   22696 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:51:25.726028   22696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:25.726032   22696 out.go:304] Setting ErrFile to fd 2...
	I0729 04:51:25.726039   22696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:25.726230   22696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:51:25.726386   22696 out.go:298] Setting JSON to false
	I0729 04:51:25.726398   22696 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:51:25.726441   22696 notify.go:220] Checking for updates...
	I0729 04:51:25.726644   22696 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:51:25.726653   22696 status.go:255] checking status of multinode-623000 ...
	I0729 04:51:25.726926   22696 status.go:330] multinode-623000 host status = "Stopped" (err=<nil>)
	I0729 04:51:25.726931   22696 status.go:343] host is not running, skipping remaining checks
	I0729 04:51:25.726934   22696 status.go:257] multinode-623000 status: &{Name:multinode-623000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr: exit status 7 (73.5775ms)

                                                
                                                
-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:51:27.768824   22698 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:51:27.769020   22698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:27.769024   22698 out.go:304] Setting ErrFile to fd 2...
	I0729 04:51:27.769027   22698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:27.769209   22698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:51:27.769378   22698 out.go:298] Setting JSON to false
	I0729 04:51:27.769393   22698 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:51:27.769435   22698 notify.go:220] Checking for updates...
	I0729 04:51:27.769682   22698 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:51:27.769691   22698 status.go:255] checking status of multinode-623000 ...
	I0729 04:51:27.770015   22698 status.go:330] multinode-623000 host status = "Stopped" (err=<nil>)
	I0729 04:51:27.770020   22698 status.go:343] host is not running, skipping remaining checks
	I0729 04:51:27.770023   22698 status.go:257] multinode-623000 status: &{Name:multinode-623000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr: exit status 7 (70.965417ms)

                                                
                                                
-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:51:35.173541   22701 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:51:35.173720   22701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:35.173725   22701 out.go:304] Setting ErrFile to fd 2...
	I0729 04:51:35.173734   22701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:35.173903   22701 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:51:35.174068   22701 out.go:298] Setting JSON to false
	I0729 04:51:35.174082   22701 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:51:35.174121   22701 notify.go:220] Checking for updates...
	I0729 04:51:35.174355   22701 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:51:35.174364   22701 status.go:255] checking status of multinode-623000 ...
	I0729 04:51:35.174677   22701 status.go:330] multinode-623000 host status = "Stopped" (err=<nil>)
	I0729 04:51:35.174682   22701 status.go:343] host is not running, skipping remaining checks
	I0729 04:51:35.174685   22701 status.go:257] multinode-623000 status: &{Name:multinode-623000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr: exit status 7 (72.859792ms)

                                                
                                                
-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:51:46.283685   22703 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:51:46.283918   22703 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:46.283923   22703 out.go:304] Setting ErrFile to fd 2...
	I0729 04:51:46.283927   22703 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:51:46.284116   22703 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:51:46.284295   22703 out.go:298] Setting JSON to false
	I0729 04:51:46.284316   22703 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:51:46.284363   22703 notify.go:220] Checking for updates...
	I0729 04:51:46.284606   22703 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:51:46.284616   22703 status.go:255] checking status of multinode-623000 ...
	I0729 04:51:46.284918   22703 status.go:330] multinode-623000 host status = "Stopped" (err=<nil>)
	I0729 04:51:46.284923   22703 status.go:343] host is not running, skipping remaining checks
	I0729 04:51:46.284926   22703 status.go:257] multinode-623000 status: &{Name:multinode-623000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr: exit status 7 (75.606042ms)

                                                
                                                
-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:52:02.272093   22706 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:52:02.272555   22706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:02.272562   22706 out.go:304] Setting ErrFile to fd 2...
	I0729 04:52:02.272566   22706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:02.272872   22706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:52:02.273104   22706 out.go:298] Setting JSON to false
	I0729 04:52:02.273119   22706 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:52:02.273224   22706 notify.go:220] Checking for updates...
	I0729 04:52:02.273718   22706 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:52:02.273732   22706 status.go:255] checking status of multinode-623000 ...
	I0729 04:52:02.273988   22706 status.go:330] multinode-623000 host status = "Stopped" (err=<nil>)
	I0729 04:52:02.273993   22706 status.go:343] host is not running, skipping remaining checks
	I0729 04:52:02.273996   22706 status.go:257] multinode-623000 status: &{Name:multinode-623000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr: exit status 7 (72.550625ms)

                                                
                                                
-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:52:19.240386   22713 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:52:19.240600   22713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:19.240604   22713 out.go:304] Setting ErrFile to fd 2...
	I0729 04:52:19.240607   22713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:19.240786   22713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:52:19.240963   22713 out.go:298] Setting JSON to false
	I0729 04:52:19.240980   22713 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:52:19.241013   22713 notify.go:220] Checking for updates...
	I0729 04:52:19.241252   22713 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:52:19.241261   22713 status.go:255] checking status of multinode-623000 ...
	I0729 04:52:19.241549   22713 status.go:330] multinode-623000 host status = "Stopped" (err=<nil>)
	I0729 04:52:19.241553   22713 status.go:343] host is not running, skipping remaining checks
	I0729 04:52:19.241556   22713 status.go:257] multinode-623000 status: &{Name:multinode-623000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-623000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (33.244167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (58.43s)
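
Between the failed `node start` (exit 85) and the final give-up at multinode_test.go:294, the test polls status at roughly increasing intervals; the timestamps above run from 04:51:20 to 04:52:19. A sketch of that poll-with-backoff shape, with an illustrative doubling delay and one-minute budget:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	bin, profile := "out/minikube-darwin-arm64", "multinode-623000"

	// Doubling delay between polls; the test's exact schedule differs but
	// grows the same way.
	delay := time.Second
	for deadline := time.Now().Add(time.Minute); time.Now().Before(deadline); {
		// A running host would report "host: Running" in the stdout format above.
		out, err := exec.Command(bin, "-p", profile, "status").Output()
		if err == nil && strings.Contains(string(out), "host: Running") {
			fmt.Println("cluster is up")
			return
		}
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Println("gave up waiting for", profile)
}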

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-623000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-623000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-623000: (3.459050167s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-623000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-623000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.217151375s)

                                                
                                                
-- stdout --
	* [multinode-623000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-623000" primary control-plane node in "multinode-623000" cluster
	* Restarting existing qemu2 VM for "multinode-623000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-623000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:52:22.826617   22737 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:52:22.826784   22737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:22.826789   22737 out.go:304] Setting ErrFile to fd 2...
	I0729 04:52:22.826792   22737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:22.826943   22737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:52:22.828204   22737 out.go:298] Setting JSON to false
	I0729 04:52:22.847130   22737 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10311,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:52:22.847195   22737 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:52:22.851359   22737 out.go:177] * [multinode-623000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:52:22.858170   22737 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:52:22.858231   22737 notify.go:220] Checking for updates...
	I0729 04:52:22.866102   22737 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:52:22.869146   22737 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:52:22.872181   22737 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:52:22.875199   22737 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:52:22.878149   22737 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:52:22.881485   22737 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:52:22.881543   22737 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:52:22.886050   22737 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:52:22.893126   22737 start.go:297] selected driver: qemu2
	I0729 04:52:22.893135   22737 start.go:901] validating driver "qemu2" against &{Name:multinode-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:52:22.893209   22737 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:52:22.895798   22737 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:52:22.895835   22737 cni.go:84] Creating CNI manager for ""
	I0729 04:52:22.895840   22737 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 04:52:22.895885   22737 start.go:340] cluster config:
	{Name:multinode-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:52:22.899651   22737 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:52:22.907156   22737 out.go:177] * Starting "multinode-623000" primary control-plane node in "multinode-623000" cluster
	I0729 04:52:22.910129   22737 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:52:22.910152   22737 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:52:22.910166   22737 cache.go:56] Caching tarball of preloaded images
	I0729 04:52:22.910224   22737 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:52:22.910231   22737 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:52:22.910305   22737 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/multinode-623000/config.json ...
	I0729 04:52:22.910751   22737 start.go:360] acquireMachinesLock for multinode-623000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:52:22.910786   22737 start.go:364] duration metric: took 29.167µs to acquireMachinesLock for "multinode-623000"
	I0729 04:52:22.910797   22737 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:52:22.910803   22737 fix.go:54] fixHost starting: 
	I0729 04:52:22.910927   22737 fix.go:112] recreateIfNeeded on multinode-623000: state=Stopped err=<nil>
	W0729 04:52:22.910937   22737 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:52:22.918921   22737 out.go:177] * Restarting existing qemu2 VM for "multinode-623000" ...
	I0729 04:52:22.923123   22737 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:52:22.923162   22737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:0e:ab:76:6b:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2
	I0729 04:52:22.925278   22737 main.go:141] libmachine: STDOUT: 
	I0729 04:52:22.925298   22737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:52:22.925326   22737 fix.go:56] duration metric: took 14.522208ms for fixHost
	I0729 04:52:22.925331   22737 start.go:83] releasing machines lock for "multinode-623000", held for 14.54025ms
	W0729 04:52:22.925337   22737 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:52:22.925378   22737 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:52:22.925386   22737 start.go:729] Will try again in 5 seconds ...
	I0729 04:52:27.927475   22737 start.go:360] acquireMachinesLock for multinode-623000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:52:27.927840   22737 start.go:364] duration metric: took 270µs to acquireMachinesLock for "multinode-623000"
	I0729 04:52:27.927961   22737 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:52:27.927979   22737 fix.go:54] fixHost starting: 
	I0729 04:52:27.928639   22737 fix.go:112] recreateIfNeeded on multinode-623000: state=Stopped err=<nil>
	W0729 04:52:27.928665   22737 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:52:27.933121   22737 out.go:177] * Restarting existing qemu2 VM for "multinode-623000" ...
	I0729 04:52:27.937037   22737 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:52:27.937241   22737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:0e:ab:76:6b:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2
	I0729 04:52:27.946048   22737 main.go:141] libmachine: STDOUT: 
	I0729 04:52:27.946107   22737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:52:27.946163   22737 fix.go:56] duration metric: took 18.185083ms for fixHost
	I0729 04:52:27.946181   22737 start.go:83] releasing machines lock for "multinode-623000", held for 18.320708ms
	W0729 04:52:27.946318   22737 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-623000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-623000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:52:27.954948   22737 out.go:177] 
	W0729 04:52:27.959120   22737 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:52:27.959163   22737 out.go:239] * 
	* 
	W0729 04:52:27.961823   22737 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:52:27.970058   22737 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-623000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-623000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (34.032292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.81s)
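Every restart attempt above dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never gets its network file descriptor and provisioning aborts with GUEST_PROVISION. A minimal Go sketch of that connectivity check, using only the socket path taken from the log (illustrative; not minikube's actual driver code):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client needs.
		// "connection refused" here reproduces the driver failure in the
		// log and means the socket_vmnet daemon is not running on the host.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Because the daemon is down for the whole run, every test below that needs a VM fails with the same signature before Kubernetes is ever exercised.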

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 node delete m03: exit status 83 (41.225375ms)

-- stdout --
	* The control-plane node multinode-623000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-623000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-623000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status --alsologtostderr: exit status 7 (30.441834ms)

-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:52:28.158511   22751 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:52:28.158684   22751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:28.158687   22751 out.go:304] Setting ErrFile to fd 2...
	I0729 04:52:28.158689   22751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:28.158827   22751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:52:28.158956   22751 out.go:298] Setting JSON to false
	I0729 04:52:28.158970   22751 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:52:28.159021   22751 notify.go:220] Checking for updates...
	I0729 04:52:28.159160   22751 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:52:28.159167   22751 status.go:255] checking status of multinode-623000 ...
	I0729 04:52:28.159400   22751 status.go:330] multinode-623000 host status = "Stopped" (err=<nil>)
	I0729 04:52:28.159404   22751 status.go:343] host is not running, skipping remaining checks
	I0729 04:52:28.159406   22751 status.go:257] multinode-623000 status: &{Name:multinode-623000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-623000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (29.293667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.25s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-623000 stop: (3.124305375s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status: exit status 7 (65.192792ms)

-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-623000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-623000 status --alsologtostderr: exit status 7 (32.739375ms)

-- stdout --
	multinode-623000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:52:31.410665   22777 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:52:31.410814   22777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:31.410817   22777 out.go:304] Setting ErrFile to fd 2...
	I0729 04:52:31.410819   22777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:31.410951   22777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:52:31.411065   22777 out.go:298] Setting JSON to false
	I0729 04:52:31.411075   22777 mustload.go:65] Loading cluster: multinode-623000
	I0729 04:52:31.411134   22777 notify.go:220] Checking for updates...
	I0729 04:52:31.411285   22777 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:52:31.411291   22777 status.go:255] checking status of multinode-623000 ...
	I0729 04:52:31.411497   22777 status.go:330] multinode-623000 host status = "Stopped" (err=<nil>)
	I0729 04:52:31.411501   22777 status.go:343] host is not running, skipping remaining checks
	I0729 04:52:31.411503   22777 status.go:257] multinode-623000 status: &{Name:multinode-623000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-623000 status --alsologtostderr": multinode-623000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-623000 status --alsologtostderr": multinode-623000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (29.468125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.25s)
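The two "incorrect number of stopped" assertions above follow from the earlier failures: the worker node was never created, so "minikube status" reports a single "host: Stopped" / "kubelet: Stopped" pair where the test expects one per node. A rough Go sketch of that tally (assumed logic for illustration; not the exact multinode_test.go source):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output for the one node that exists, as captured above.
		status := "multinode-623000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		// One "host: Stopped" line per node; a two-node cluster should yield 2.
		fmt.Println(strings.Count(status, "host: Stopped")) // prints 1
	}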

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-623000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-623000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.186501708s)

-- stdout --
	* [multinode-623000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-623000" primary control-plane node in "multinode-623000" cluster
	* Restarting existing qemu2 VM for "multinode-623000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-623000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:52:31.469557   22781 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:52:31.469676   22781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:31.469679   22781 out.go:304] Setting ErrFile to fd 2...
	I0729 04:52:31.469682   22781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:31.469797   22781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:52:31.470797   22781 out.go:298] Setting JSON to false
	I0729 04:52:31.486913   22781 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10320,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:52:31.486984   22781 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:52:31.492490   22781 out.go:177] * [multinode-623000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:52:31.500522   22781 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:52:31.500560   22781 notify.go:220] Checking for updates...
	I0729 04:52:31.507463   22781 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:52:31.511297   22781 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:52:31.515439   22781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:52:31.518450   22781 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:52:31.519850   22781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:52:31.522656   22781 config.go:182] Loaded profile config "multinode-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:52:31.522935   22781 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:52:31.527458   22781 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:52:31.532362   22781 start.go:297] selected driver: qemu2
	I0729 04:52:31.532371   22781 start.go:901] validating driver "qemu2" against &{Name:multinode-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:52:31.532439   22781 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:52:31.534744   22781 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:52:31.534767   22781 cni.go:84] Creating CNI manager for ""
	I0729 04:52:31.534772   22781 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 04:52:31.534827   22781 start.go:340] cluster config:
	{Name:multinode-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:52:31.538268   22781 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:52:31.546364   22781 out.go:177] * Starting "multinode-623000" primary control-plane node in "multinode-623000" cluster
	I0729 04:52:31.550399   22781 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:52:31.550414   22781 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:52:31.550424   22781 cache.go:56] Caching tarball of preloaded images
	I0729 04:52:31.550476   22781 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:52:31.550481   22781 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:52:31.550541   22781 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/multinode-623000/config.json ...
	I0729 04:52:31.550969   22781 start.go:360] acquireMachinesLock for multinode-623000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:52:31.550999   22781 start.go:364] duration metric: took 24.458µs to acquireMachinesLock for "multinode-623000"
	I0729 04:52:31.551008   22781 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:52:31.551012   22781 fix.go:54] fixHost starting: 
	I0729 04:52:31.551122   22781 fix.go:112] recreateIfNeeded on multinode-623000: state=Stopped err=<nil>
	W0729 04:52:31.551131   22781 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:52:31.559368   22781 out.go:177] * Restarting existing qemu2 VM for "multinode-623000" ...
	I0729 04:52:31.563446   22781 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:52:31.563480   22781 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:0e:ab:76:6b:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2
	I0729 04:52:31.565444   22781 main.go:141] libmachine: STDOUT: 
	I0729 04:52:31.565465   22781 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:52:31.565492   22781 fix.go:56] duration metric: took 14.479792ms for fixHost
	I0729 04:52:31.565496   22781 start.go:83] releasing machines lock for "multinode-623000", held for 14.49375ms
	W0729 04:52:31.565503   22781 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:52:31.565535   22781 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:52:31.565540   22781 start.go:729] Will try again in 5 seconds ...
	I0729 04:52:36.567582   22781 start.go:360] acquireMachinesLock for multinode-623000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:52:36.567921   22781 start.go:364] duration metric: took 255.25µs to acquireMachinesLock for "multinode-623000"
	I0729 04:52:36.568064   22781 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:52:36.568080   22781 fix.go:54] fixHost starting: 
	I0729 04:52:36.568759   22781 fix.go:112] recreateIfNeeded on multinode-623000: state=Stopped err=<nil>
	W0729 04:52:36.568789   22781 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:52:36.578127   22781 out.go:177] * Restarting existing qemu2 VM for "multinode-623000" ...
	I0729 04:52:36.582097   22781 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:52:36.582326   22781 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:0e:ab:76:6b:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/multinode-623000/disk.qcow2
	I0729 04:52:36.591203   22781 main.go:141] libmachine: STDOUT: 
	I0729 04:52:36.591267   22781 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:52:36.591339   22781 fix.go:56] duration metric: took 23.255708ms for fixHost
	I0729 04:52:36.591355   22781 start.go:83] releasing machines lock for "multinode-623000", held for 23.389333ms
	W0729 04:52:36.591510   22781 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-623000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-623000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:52:36.600102   22781 out.go:177] 
	W0729 04:52:36.604203   22781 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:52:36.604224   22781 out.go:239] * 
	* 
	W0729 04:52:36.606982   22781 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:52:36.614999   22781 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-623000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (71.452083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)

TestMultiNode/serial/ValidateNameConflict (20.57s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-623000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-623000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-623000-m01 --driver=qemu2 : exit status 80 (10.221276792s)

-- stdout --
	* [multinode-623000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-623000-m01" primary control-plane node in "multinode-623000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-623000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-623000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-623000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-623000-m02 --driver=qemu2 : exit status 80 (10.116239042s)

-- stdout --
	* [multinode-623000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-623000-m02" primary control-plane node in "multinode-623000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-623000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-623000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-623000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-623000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-623000: exit status 83 (83.559834ms)

-- stdout --
	* The control-plane node multinode-623000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-623000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-623000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-623000 -n multinode-623000: exit status 7 (30.090334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.57s)

TestPreload (9.97s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-521000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-521000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.821675708s)

-- stdout --
	* [test-preload-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-521000" primary control-plane node in "test-preload-521000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-521000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:52:57.401507   22833 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:52:57.401645   22833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:57.401651   22833 out.go:304] Setting ErrFile to fd 2...
	I0729 04:52:57.401654   22833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:52:57.401802   22833 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:52:57.402857   22833 out.go:298] Setting JSON to false
	I0729 04:52:57.418953   22833 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10346,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:52:57.419009   22833 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:52:57.425452   22833 out.go:177] * [test-preload-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:52:57.433442   22833 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:52:57.433484   22833 notify.go:220] Checking for updates...
	I0729 04:52:57.443383   22833 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:52:57.446451   22833 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:52:57.449299   22833 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:52:57.452411   22833 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:52:57.455432   22833 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:52:57.457117   22833 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:52:57.457172   22833 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:52:57.460448   22833 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:52:57.467253   22833 start.go:297] selected driver: qemu2
	I0729 04:52:57.467260   22833 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:52:57.467266   22833 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:52:57.469803   22833 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:52:57.474327   22833 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:52:57.477544   22833 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:52:57.477590   22833 cni.go:84] Creating CNI manager for ""
	I0729 04:52:57.477598   22833 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:52:57.477602   22833 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:52:57.477633   22833 start.go:340] cluster config:
	{Name:test-preload-521000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:52:57.481403   22833 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:52:57.490447   22833 out.go:177] * Starting "test-preload-521000" primary control-plane node in "test-preload-521000" cluster
	I0729 04:52:57.494523   22833 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0729 04:52:57.494619   22833 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/test-preload-521000/config.json ...
	I0729 04:52:57.494636   22833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/test-preload-521000/config.json: {Name:mk91e0aff9dc1d43cc86da7c5ed535561cf3fc46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:52:57.494646   22833 cache.go:107] acquiring lock: {Name:mk6e9d4699d4fea0baf71716dba43d2ecd2a3927 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:52:57.494669   22833 cache.go:107] acquiring lock: {Name:mked93435b00343083f311415a974a51f35eb5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:52:57.494679   22833 cache.go:107] acquiring lock: {Name:mka3c02fdeb235f47ec984022d18afc2fff6d73b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:52:57.494860   22833 cache.go:107] acquiring lock: {Name:mk4affbf7ccaa4aef7a11e6f6eb86936beb58105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:52:57.494880   22833 cache.go:107] acquiring lock: {Name:mk81c74935233baf804b27be694744754772a857 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:52:57.494961   22833 start.go:360] acquireMachinesLock for test-preload-521000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:52:57.495023   22833 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 04:52:57.495020   22833 cache.go:107] acquiring lock: {Name:mk4387df52a11ea17c55250fa44fcc8ac114de8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:52:57.495042   22833 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:52:57.495037   22833 start.go:364] duration metric: took 67.333µs to acquireMachinesLock for "test-preload-521000"
	I0729 04:52:57.494907   22833 cache.go:107] acquiring lock: {Name:mk357a2b0fb1585ea6ccac1d8a314b6979157b96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:52:57.494990   22833 cache.go:107] acquiring lock: {Name:mkc1d6fa35ed30c3f527e2ffc5c95c0c71ebade0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:52:57.495102   22833 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 04:52:57.495110   22833 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:52:57.495136   22833 start.go:93] Provisioning new machine with config: &{Name:test-preload-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:52:57.495269   22833 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 04:52:57.495284   22833 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:52:57.495262   22833 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:52:57.495297   22833 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 04:52:57.495302   22833 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 04:52:57.503303   22833 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:52:57.510550   22833 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:52:57.510649   22833 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:52:57.511452   22833 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 04:52:57.511623   22833 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 04:52:57.511655   22833 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:52:57.511646   22833 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 04:52:57.511701   22833 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 04:52:57.511714   22833 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 04:52:57.521985   22833 start.go:159] libmachine.API.Create for "test-preload-521000" (driver="qemu2")
	I0729 04:52:57.522000   22833 client.go:168] LocalClient.Create starting
	I0729 04:52:57.522071   22833 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 04:52:57.522101   22833 main.go:141] libmachine: Decoding PEM data...
	I0729 04:52:57.522128   22833 main.go:141] libmachine: Parsing certificate...
	I0729 04:52:57.522176   22833 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 04:52:57.522205   22833 main.go:141] libmachine: Decoding PEM data...
	I0729 04:52:57.522213   22833 main.go:141] libmachine: Parsing certificate...
	I0729 04:52:57.522609   22833 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:52:57.674608   22833 main.go:141] libmachine: Creating SSH key...
	I0729 04:52:57.783880   22833 main.go:141] libmachine: Creating Disk image...
	I0729 04:52:57.783897   22833 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:52:57.784140   22833 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/disk.qcow2
	I0729 04:52:57.794153   22833 main.go:141] libmachine: STDOUT: 
	I0729 04:52:57.794178   22833 main.go:141] libmachine: STDERR: 
	I0729 04:52:57.794232   22833 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/disk.qcow2 +20000M
	I0729 04:52:57.803234   22833 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:52:57.803247   22833 main.go:141] libmachine: STDERR: 
	I0729 04:52:57.803275   22833 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/disk.qcow2
	I0729 04:52:57.803278   22833 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:52:57.803290   22833 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:52:57.803315   22833 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:2c:1b:2d:2f:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/disk.qcow2
	I0729 04:52:57.805153   22833 main.go:141] libmachine: STDOUT: 
	I0729 04:52:57.805179   22833 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:52:57.805198   22833 client.go:171] duration metric: took 283.201416ms to LocalClient.Create
	W0729 04:52:57.888312   22833 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 04:52:57.888342   22833 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 04:52:57.909248   22833 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 04:52:57.921469   22833 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 04:52:57.934112   22833 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0729 04:52:57.989030   22833 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 04:52:58.026175   22833 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0729 04:52:58.027201   22833 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0729 04:52:58.064549   22833 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0729 04:52:58.064587   22833 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 569.785167ms
	I0729 04:52:58.064623   22833 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0729 04:52:58.268927   22833 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 04:52:58.269020   22833 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 04:52:58.462649   22833 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 04:52:58.462711   22833 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 968.08425ms
	I0729 04:52:58.462736   22833 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 04:52:59.805375   22833 start.go:128] duration metric: took 2.31011475s to createHost
	I0729 04:52:59.805428   22833 start.go:83] releasing machines lock for "test-preload-521000", held for 2.310410291s
	W0729 04:52:59.805479   22833 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:52:59.822670   22833 out.go:177] * Deleting "test-preload-521000" in qemu2 ...
	W0729 04:52:59.853299   22833 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:52:59.853325   22833 start.go:729] Will try again in 5 seconds ...
	I0729 04:53:00.038098   22833 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0729 04:53:00.038151   22833 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.543248542s
	I0729 04:53:00.038202   22833 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0729 04:53:00.149948   22833 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0729 04:53:00.150024   22833 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.655149375s
	I0729 04:53:00.150074   22833 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0729 04:53:02.147912   22833 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0729 04:53:02.147966   22833 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.653422125s
	I0729 04:53:02.147990   22833 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0729 04:53:02.546884   22833 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0729 04:53:02.546939   22833 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.052406s
	I0729 04:53:02.546963   22833 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0729 04:53:04.234585   22833 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0729 04:53:04.234654   22833 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.739901666s
	I0729 04:53:04.234679   22833 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0729 04:53:04.853368   22833 start.go:360] acquireMachinesLock for test-preload-521000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:53:04.853787   22833 start.go:364] duration metric: took 350.792µs to acquireMachinesLock for "test-preload-521000"
	I0729 04:53:04.853903   22833 start.go:93] Provisioning new machine with config: &{Name:test-preload-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:53:04.854084   22833 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:53:04.865618   22833 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:53:04.915691   22833 start.go:159] libmachine.API.Create for "test-preload-521000" (driver="qemu2")
	I0729 04:53:04.915739   22833 client.go:168] LocalClient.Create starting
	I0729 04:53:04.915842   22833 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 04:53:04.915906   22833 main.go:141] libmachine: Decoding PEM data...
	I0729 04:53:04.915925   22833 main.go:141] libmachine: Parsing certificate...
	I0729 04:53:04.915990   22833 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 04:53:04.916034   22833 main.go:141] libmachine: Decoding PEM data...
	I0729 04:53:04.916050   22833 main.go:141] libmachine: Parsing certificate...
	I0729 04:53:04.916520   22833 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:53:05.087871   22833 main.go:141] libmachine: Creating SSH key...
	I0729 04:53:05.129155   22833 main.go:141] libmachine: Creating Disk image...
	I0729 04:53:05.129166   22833 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:53:05.129331   22833 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/disk.qcow2
	I0729 04:53:05.138483   22833 main.go:141] libmachine: STDOUT: 
	I0729 04:53:05.138501   22833 main.go:141] libmachine: STDERR: 
	I0729 04:53:05.138544   22833 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/disk.qcow2 +20000M
	I0729 04:53:05.146580   22833 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:53:05.146606   22833 main.go:141] libmachine: STDERR: 
	I0729 04:53:05.146628   22833 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/disk.qcow2
	I0729 04:53:05.146643   22833 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:53:05.146657   22833 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:53:05.146702   22833 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:93:59:9f:16:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/test-preload-521000/disk.qcow2
	I0729 04:53:05.148440   22833 main.go:141] libmachine: STDOUT: 
	I0729 04:53:05.148454   22833 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:53:05.148467   22833 client.go:171] duration metric: took 232.729167ms to LocalClient.Create
	I0729 04:53:06.965673   22833 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0729 04:53:06.965728   22833 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.471107875s
	I0729 04:53:06.965748   22833 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0729 04:53:06.965784   22833 cache.go:87] Successfully saved all images to host disk.
	I0729 04:53:07.149915   22833 start.go:128] duration metric: took 2.295853625s to createHost
	I0729 04:53:07.149980   22833 start.go:83] releasing machines lock for "test-preload-521000", held for 2.296221083s
	W0729 04:53:07.150279   22833 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:53:07.159832   22833 out.go:177] 
	W0729 04:53:07.166902   22833 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:53:07.166937   22833 out.go:239] * 
	* 
	W0729 04:53:07.169449   22833 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:53:07.179866   22833 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-521000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-29 04:53:07.197223 -0700 PDT m=+618.034687084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-521000 -n test-preload-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-521000 -n test-preload-521000: exit status 7 (66.372667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-521000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-521000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-521000
--- FAIL: TestPreload (9.97s)
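Every qemu2 provisioning failure in this report traces back to the same host-side condition: nothing is listening on the unix socket /var/run/socket_vmnet, so socket_vmnet_client is refused before QEMU is even launched. A minimal recovery sketch for the build host follows; the daemon path and the gateway address are assumptions inferred from the client path shown in the logs above, not values recorded in this report:

	# Is anything listening on the socket the client tries to reach?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If not, relaunch the daemon (vmnet access requires root).
	# 192.168.105.1 is an example gateway address, not taken from this report.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

Once the daemon is back, re-running the failed start command (out/minikube-darwin-arm64 start -p test-preload-521000 ... --driver=qemu2) should get past the GUEST_PROVISION exit.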

TestScheduledStopUnix (10.11s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-705000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-705000 --memory=2048 --driver=qemu2 : exit status 80 (9.956650625s)

-- stdout --
	* [scheduled-stop-705000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-705000" primary control-plane node in "scheduled-stop-705000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-705000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-705000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-705000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-705000" primary control-plane node in "scheduled-stop-705000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-705000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-705000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-29 04:53:17.299867 -0700 PDT m=+628.137568334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-705000 -n scheduled-stop-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-705000 -n scheduled-stop-705000: exit status 7 (68.019375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-705000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-705000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-705000
--- FAIL: TestScheduledStopUnix (10.11s)

TestSkaffold (12.93s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2155907338 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2155907338 version: (1.063983416s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-064000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-064000 --memory=2600 --driver=qemu2 : exit status 80 (9.876124958s)

-- stdout --
	* [skaffold-064000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-064000" primary control-plane node in "skaffold-064000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-064000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-064000" primary control-plane node in "skaffold-064000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-29 04:53:30.234282 -0700 PDT m=+641.072288751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-064000 -n skaffold-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-064000 -n skaffold-064000: exit status 7 (62.07325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-064000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-064000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-064000
--- FAIL: TestSkaffold (12.93s)
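The refusal is also reproducible outside the Go test harness by invoking the same wrapper that appears in the exec lines above. socket_vmnet_client connects to the socket and hands the open descriptor to the wrapped command as fd 3, which is why the QEMU command lines in these logs use -netdev socket,id=net0,fd=3. A sketch, substituting the no-op command "true" for qemu-system-aarch64 (the substitution is an assumption for illustration):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	echo $?   # non-zero, with the same 'Failed to connect ... Connection refused' on stderr, while the daemon is down

If this probe succeeds but minikube still fails, the problem lies elsewhere in the QEMU invocation rather than in the vmnet socket.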

TestRunningBinaryUpgrade (635.24s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2065455297 start -p running-upgrade-965000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2065455297 start -p running-upgrade-965000 --memory=2200 --vm-driver=qemu2 : (1m2.903749042s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-965000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-965000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m57.923165s)

-- stdout --
	* [running-upgrade-965000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-965000" primary control-plane node in "running-upgrade-965000" cluster
	* Updating the running qemu2 "running-upgrade-965000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 04:54:55.361731   23149 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:54:55.361876   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:54:55.361884   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:54:55.361887   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:54:55.362019   23149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:54:55.363094   23149 out.go:298] Setting JSON to false
	I0729 04:54:55.379762   23149 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10464,"bootTime":1722243631,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:54:55.379818   23149 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:54:55.384321   23149 out.go:177] * [running-upgrade-965000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:54:55.391330   23149 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:54:55.391392   23149 notify.go:220] Checking for updates...
	I0729 04:54:55.398304   23149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:54:55.402269   23149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:54:55.405367   23149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:54:55.408346   23149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:54:55.411317   23149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:54:55.414670   23149 config.go:182] Loaded profile config "running-upgrade-965000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:54:55.417236   23149 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 04:54:55.420299   23149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:54:55.424284   23149 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:54:55.431282   23149 start.go:297] selected driver: qemu2
	I0729 04:54:55.431287   23149 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54177 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:54:55.431334   23149 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:54:55.433597   23149 cni.go:84] Creating CNI manager for ""
	I0729 04:54:55.433615   23149 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:54:55.433638   23149 start.go:340] cluster config:
	{Name:running-upgrade-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54177 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:54:55.433701   23149 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:54:55.441278   23149 out.go:177] * Starting "running-upgrade-965000" primary control-plane node in "running-upgrade-965000" cluster
	I0729 04:54:55.445313   23149 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:54:55.445344   23149 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 04:54:55.445356   23149 cache.go:56] Caching tarball of preloaded images
	I0729 04:54:55.445431   23149 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:54:55.445439   23149 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 04:54:55.445492   23149 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/config.json ...
	I0729 04:54:55.445874   23149 start.go:360] acquireMachinesLock for running-upgrade-965000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:55:08.246786   23149 start.go:364] duration metric: took 12.801201s to acquireMachinesLock for "running-upgrade-965000"
	I0729 04:55:08.246809   23149 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:55:08.246817   23149 fix.go:54] fixHost starting: 
	I0729 04:55:08.247706   23149 fix.go:112] recreateIfNeeded on running-upgrade-965000: state=Running err=<nil>
	W0729 04:55:08.247716   23149 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:55:08.250835   23149 out.go:177] * Updating the running qemu2 "running-upgrade-965000" VM ...
	I0729 04:55:08.257614   23149 machine.go:94] provisionDockerMachine start ...
	I0729 04:55:08.257675   23149 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:08.257783   23149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 54112 <nil> <nil>}
	I0729 04:55:08.257788   23149 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 04:55:08.310152   23149 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-965000
	
	I0729 04:55:08.310171   23149 buildroot.go:166] provisioning hostname "running-upgrade-965000"
	I0729 04:55:08.310222   23149 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:08.310343   23149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 54112 <nil> <nil>}
	I0729 04:55:08.310350   23149 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-965000 && echo "running-upgrade-965000" | sudo tee /etc/hostname
	I0729 04:55:08.370795   23149 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-965000
	
	I0729 04:55:08.370857   23149 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:08.371043   23149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 54112 <nil> <nil>}
	I0729 04:55:08.371051   23149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-965000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-965000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-965000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 04:55:08.427374   23149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 04:55:08.427387   23149 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19338-21024/.minikube CaCertPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19338-21024/.minikube}
	I0729 04:55:08.427397   23149 buildroot.go:174] setting up certificates
	I0729 04:55:08.427402   23149 provision.go:84] configureAuth start
	I0729 04:55:08.427408   23149 provision.go:143] copyHostCerts
	I0729 04:55:08.427472   23149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19338-21024/.minikube/cert.pem, removing ...
	I0729 04:55:08.427479   23149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19338-21024/.minikube/cert.pem
	I0729 04:55:08.427600   23149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19338-21024/.minikube/cert.pem (1123 bytes)
	I0729 04:55:08.427778   23149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19338-21024/.minikube/key.pem, removing ...
	I0729 04:55:08.427781   23149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19338-21024/.minikube/key.pem
	I0729 04:55:08.427822   23149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19338-21024/.minikube/key.pem (1679 bytes)
	I0729 04:55:08.427943   23149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.pem, removing ...
	I0729 04:55:08.427946   23149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.pem
	I0729 04:55:08.427991   23149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.pem (1078 bytes)
	I0729 04:55:08.428074   23149 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-965000 san=[127.0.0.1 localhost minikube running-upgrade-965000]
	I0729 04:55:08.503177   23149 provision.go:177] copyRemoteCerts
	I0729 04:55:08.503219   23149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 04:55:08.503227   23149 sshutil.go:53] new ssh client: &{IP:localhost Port:54112 SSHKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/running-upgrade-965000/id_rsa Username:docker}
	I0729 04:55:08.531852   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 04:55:08.538960   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 04:55:08.546064   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 04:55:08.554221   23149 provision.go:87] duration metric: took 126.817625ms to configureAuth
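	Note: configureAuth above generated a server certificate whose SANs cover [127.0.0.1 localhost minikube running-upgrade-965000]. A minimal crypto/x509 sketch of issuing such a SAN certificate follows; it self-signs for brevity, whereas the real provisioner signs server.pem with the ca.pem/ca-key.pem pair listed above:
	-- sketch --
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-965000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration above
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SANs from the provision.go:117 line above.
			DNSNames:    []string{"localhost", "minikube", "running-upgrade-965000"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
		}
		// Self-signed (template is its own parent) purely for the sketch.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	-- /sketch --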
	I0729 04:55:08.554232   23149 buildroot.go:189] setting minikube options for container-runtime
	I0729 04:55:08.554344   23149 config.go:182] Loaded profile config "running-upgrade-965000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:55:08.554375   23149 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:08.554472   23149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 54112 <nil> <nil>}
	I0729 04:55:08.554479   23149 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 04:55:08.604463   23149 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 04:55:08.604475   23149 buildroot.go:70] root file system type: tmpfs
	I0729 04:55:08.604550   23149 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 04:55:08.604611   23149 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:08.604727   23149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 54112 <nil> <nil>}
	I0729 04:55:08.604762   23149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 04:55:08.659348   23149 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 04:55:08.659383   23149 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:08.659506   23149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 54112 <nil> <nil>}
	I0729 04:55:08.659517   23149 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 04:55:08.713304   23149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 04:55:08.713315   23149 machine.go:97] duration metric: took 455.706667ms to provisionDockerMachine
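	Note: the one-liner above only swaps in docker.service.new (and daemon-reloads, enables, and restarts docker) when `diff -u` reports a difference, leaving a running daemon untouched otherwise. A Go sketch of that compare-then-replace pattern via os/exec (updateIfChanged is a hypothetical helper; the sketch treats any diff failure as drift, which is slightly coarser than checking exit codes):
	-- sketch --
	package main

	import (
		"fmt"
		"os/exec"
	)

	// updateIfChanged mirrors the shell one-liner above: keep the live unit
	// when old and new are identical (diff exits 0), otherwise install the
	// new file and restart the service.
	func updateIfChanged(oldPath, newPath, unit string) error {
		if err := exec.Command("diff", "-u", oldPath, newPath).Run(); err == nil {
			return nil // no drift, nothing to do
		}
		for _, args := range [][]string{
			{"mv", newPath, oldPath},
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", unit},
			{"systemctl", "restart", unit},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		err := updateIfChanged("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new", "docker")
		if err != nil {
			fmt.Println(err)
		}
	}
	-- /sketch --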
	I0729 04:55:08.713321   23149 start.go:293] postStartSetup for "running-upgrade-965000" (driver="qemu2")
	I0729 04:55:08.713328   23149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 04:55:08.713378   23149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 04:55:08.713386   23149 sshutil.go:53] new ssh client: &{IP:localhost Port:54112 SSHKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/running-upgrade-965000/id_rsa Username:docker}
	I0729 04:55:08.741356   23149 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 04:55:08.742815   23149 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 04:55:08.742822   23149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19338-21024/.minikube/addons for local assets ...
	I0729 04:55:08.742906   23149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19338-21024/.minikube/files for local assets ...
	I0729 04:55:08.742988   23149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19338-21024/.minikube/files/etc/ssl/certs/215082.pem -> 215082.pem in /etc/ssl/certs
	I0729 04:55:08.743083   23149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 04:55:08.746918   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/files/etc/ssl/certs/215082.pem --> /etc/ssl/certs/215082.pem (1708 bytes)
	I0729 04:55:08.753772   23149 start.go:296] duration metric: took 40.444958ms for postStartSetup
	I0729 04:55:08.753800   23149 fix.go:56] duration metric: took 506.996417ms for fixHost
	I0729 04:55:08.753852   23149 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:08.753970   23149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 54112 <nil> <nil>}
	I0729 04:55:08.753976   23149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 04:55:08.805137   23149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722254109.065699868
	
	I0729 04:55:08.805149   23149 fix.go:216] guest clock: 1722254109.065699868
	I0729 04:55:08.805153   23149 fix.go:229] Guest: 2024-07-29 04:55:09.065699868 -0700 PDT Remote: 2024-07-29 04:55:08.753805 -0700 PDT m=+13.414348126 (delta=311.894868ms)
	I0729 04:55:08.805164   23149 fix.go:200] guest clock delta is within tolerance: 311.894868ms
	I0729 04:55:08.805167   23149 start.go:83] releasing machines lock for "running-upgrade-965000", held for 558.385208ms
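	Note: the clock check above runs `date +%s.%N` on the guest, parses the seconds.nanoseconds value (1722254109.065699868), and accepts the ~312ms skew against the host. A sketch of the parse-and-compare step (the one-second tolerance below is an assumption for illustration, not minikube's exact threshold):
	-- sketch --
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses `date +%s.%N` output and returns guest-host skew.
	// %N always prints nine digits, so the fraction parses directly as nanoseconds.
	func guestClockDelta(out string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		delta, _ := guestClockDelta("1722254109.065699868", time.Now())
		const tolerance = time.Second // assumed threshold for the sketch
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < tolerance)
	}
	-- /sketch --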
	I0729 04:55:08.805238   23149 ssh_runner.go:195] Run: cat /version.json
	I0729 04:55:08.805247   23149 sshutil.go:53] new ssh client: &{IP:localhost Port:54112 SSHKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/running-upgrade-965000/id_rsa Username:docker}
	I0729 04:55:08.805268   23149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 04:55:08.805289   23149 sshutil.go:53] new ssh client: &{IP:localhost Port:54112 SSHKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/running-upgrade-965000/id_rsa Username:docker}
	W0729 04:55:08.805851   23149 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:54297->127.0.0.1:54112: write: broken pipe
	I0729 04:55:08.805871   23149 retry.go:31] will retry after 186.399574ms: ssh: handshake failed: write tcp 127.0.0.1:54297->127.0.0.1:54112: write: broken pipe
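	Note: the broken-pipe handshake failure above is treated as transient — retry.go schedules another attempt after a short randomized delay rather than failing the dial. A sketch of that retry pattern (withRetry is a hypothetical helper; the 500ms delay cap is an assumption):
	-- sketch --
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// withRetry re-runs fn on failure with a short randomized wait, the
	// pattern behind the "will retry after 186.399574ms" line above.
	func withRetry(attempts int, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := time.Duration(rand.Int63n(int64(500 * time.Millisecond)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		_ = withRetry(3, func() error {
			calls++
			if calls < 2 {
				return errors.New("ssh: handshake failed: write: broken pipe")
			}
			return nil
		})
	}
	-- /sketch --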
	W0729 04:55:08.831365   23149 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 04:55:08.831425   23149 ssh_runner.go:195] Run: systemctl --version
	I0729 04:55:08.833204   23149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 04:55:08.834967   23149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 04:55:08.834991   23149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 04:55:08.838066   23149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 04:55:08.842568   23149 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 04:55:08.842576   23149 start.go:495] detecting cgroup driver to use...
	I0729 04:55:08.842656   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:55:08.848035   23149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 04:55:08.850850   23149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 04:55:08.854209   23149 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 04:55:08.854245   23149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 04:55:08.857813   23149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:55:08.861269   23149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 04:55:08.864489   23149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:55:08.867474   23149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 04:55:08.870837   23149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 04:55:08.874595   23149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 04:55:08.877844   23149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 04:55:08.881321   23149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 04:55:08.884078   23149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 04:55:08.887062   23149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:55:08.976536   23149 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 04:55:08.986132   23149 start.go:495] detecting cgroup driver to use...
	I0729 04:55:08.986204   23149 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 04:55:08.992835   23149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:55:09.003347   23149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 04:55:09.019577   23149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:55:09.060051   23149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 04:55:09.065273   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:55:09.070581   23149 ssh_runner.go:195] Run: which cri-dockerd
	I0729 04:55:09.071818   23149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 04:55:09.074419   23149 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 04:55:09.079536   23149 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 04:55:09.185749   23149 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 04:55:09.301200   23149 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 04:55:09.301266   23149 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 04:55:09.309111   23149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:55:09.420104   23149 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:55:31.087601   23149 ssh_runner.go:235] Completed: sudo systemctl restart docker: (21.667990333s)
	I0729 04:55:31.087673   23149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 04:55:31.092190   23149 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 04:55:31.100695   23149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:55:31.105952   23149 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 04:55:31.185507   23149 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 04:55:31.270418   23149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:55:31.352636   23149 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 04:55:31.358784   23149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:55:31.363483   23149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:55:31.449088   23149 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 04:55:31.488686   23149 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 04:55:31.488761   23149 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 04:55:31.491004   23149 start.go:563] Will wait 60s for crictl version
	I0729 04:55:31.491053   23149 ssh_runner.go:195] Run: which crictl
	I0729 04:55:31.492569   23149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 04:55:31.504015   23149 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 04:55:31.504082   23149 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:55:31.517473   23149 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:55:31.532750   23149 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 04:55:31.532888   23149 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 04:55:31.534423   23149 kubeadm.go:883] updating cluster {Name:running-upgrade-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54177 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 04:55:31.534477   23149 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:55:31.534515   23149 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:55:31.544768   23149 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:55:31.544776   23149 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:55:31.544825   23149 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:55:31.547903   23149 ssh_runner.go:195] Run: which lz4
	I0729 04:55:31.549206   23149 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 04:55:31.550500   23149 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 04:55:31.550510   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 04:55:32.462132   23149 docker.go:649] duration metric: took 912.963417ms to copy over tarball
	I0729 04:55:32.462194   23149 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 04:55:33.733240   23149 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.271063833s)
	I0729 04:55:33.733255   23149 ssh_runner.go:146] rm: /preloaded.tar.lz4
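	Note: the preload path above is a stat (miss) → scp of the ~360MB tarball → lz4-compressed tar extraction into /var → rm of the tarball. A sketch of the timed extraction step, assuming lz4 is installed and the tarball was already copied over:
	-- sketch --
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Same invocation as the ssh_runner.go:195 line above.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "extract failed:", err)
			os.Exit(1)
		}
		fmt.Printf("extracted preload in %s\n", time.Since(start)) // cf. the 1.27s above
		_ = os.Remove("/preloaded.tar.lz4") // free the space, as the rm above does
	}
	-- /sketch --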
	I0729 04:55:33.748805   23149 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:55:33.752029   23149 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 04:55:33.757612   23149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:55:33.840102   23149 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:55:34.062363   23149 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:55:34.077529   23149 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:55:34.077538   23149 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:55:34.077543   23149 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 04:55:34.081303   23149 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:55:34.083036   23149 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:55:34.085080   23149 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:55:34.085967   23149 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:55:34.087200   23149 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:55:34.087649   23149 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:55:34.088626   23149 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:55:34.089409   23149 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:55:34.089990   23149 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 04:55:34.090259   23149 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:55:34.091581   23149 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:55:34.091740   23149 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:55:34.092677   23149 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:55:34.092703   23149 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 04:55:34.093994   23149 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:55:34.094915   23149 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:55:34.463885   23149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:55:34.475162   23149 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 04:55:34.475189   23149 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:55:34.475243   23149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:55:34.485949   23149 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 04:55:34.497329   23149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	W0729 04:55:34.505699   23149 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 04:55:34.505828   23149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:55:34.509025   23149 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 04:55:34.509048   23149 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:55:34.509086   23149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:55:34.513750   23149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 04:55:34.521879   23149 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 04:55:34.521901   23149 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:55:34.521958   23149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:55:34.528120   23149 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 04:55:34.533957   23149 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 04:55:34.533978   23149 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:55:34.534039   23149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 04:55:34.540509   23149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 04:55:34.541583   23149 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 04:55:34.541680   23149 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:55:34.551609   23149 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 04:55:34.551709   23149 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 04:55:34.552332   23149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:55:34.552766   23149 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 04:55:34.552783   23149 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 04:55:34.552782   23149 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 04:55:34.552804   23149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 04:55:34.552812   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 04:55:34.554877   23149 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0729 04:55:34.554897   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0729 04:55:34.577823   23149 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 04:55:34.577851   23149 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:55:34.577920   23149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:55:34.586716   23149 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 04:55:34.586832   23149 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 04:55:34.587599   23149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:55:34.608122   23149 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 04:55:34.609909   23149 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 04:55:34.609929   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 04:55:34.649101   23149 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 04:55:34.649122   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 04:55:34.649891   23149 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 04:55:34.649922   23149 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:55:34.649979   23149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:55:34.719383   23149 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 04:55:34.719412   23149 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 04:55:34.719465   23149 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:55:34.719473   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0729 04:55:34.745512   23149 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 04:55:34.745636   23149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:55:34.816940   23149 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 04:55:34.817027   23149 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 04:55:34.817050   23149 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:55:34.817109   23149 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:55:34.934187   23149 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 04:55:34.934200   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0729 04:55:35.082813   23149 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 04:55:35.082847   23149 cache_images.go:92] duration metric: took 1.005322083s to LoadCachedImages
	W0729 04:55:35.082897   23149 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
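	Note: each "Loading image" step above streams a cached tarball into the daemon with `sudo cat <file> | docker load`. A sketch of the same pipe expressed directly in Go, with the file attached as the command's stdin (loadImage is a hypothetical helper):
	-- sketch --
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// loadImage streams an image tarball into the docker daemon, like the
	// `sudo cat /var/lib/minikube/images/... | docker load` commands above.
	func loadImage(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		cmd := exec.Command("docker", "load")
		cmd.Stdin = f // equivalent to the shell pipe
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker load: %v: %s", err, out)
		}
		fmt.Print(string(out))
		return nil
	}

	func main() {
		if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	-- /sketch --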
	I0729 04:55:35.082903   23149 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 04:55:35.082967   23149 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-965000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 04:55:35.083027   23149 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 04:55:35.097686   23149 cni.go:84] Creating CNI manager for ""
	I0729 04:55:35.097696   23149 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:55:35.097701   23149 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 04:55:35.097710   23149 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-965000 NodeName:running-upgrade-965000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 04:55:35.097769   23149 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-965000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
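	Note: the kubeadm.yaml above is rendered from the option struct logged at kubeadm.go:181. A small text/template sketch of rendering just the InitConfiguration stanza from such options (the template text here is illustrative; minikube's real templates live under its bootstrapper package and cover far more fields):
	-- sketch --
	package main

	import (
		"os"
		"text/template"
	)

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	  taints: []
	`

	func main() {
		opts := struct {
			AdvertiseAddress, CRISocket, NodeName string
			APIServerPort                         int
		}{"10.0.2.15", "unix:///var/run/cri-dockerd.sock", "running-upgrade-965000", 8443}
		template.Must(template.New("init").Parse(initCfg)).Execute(os.Stdout, opts)
	}
	-- /sketch --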
	I0729 04:55:35.097822   23149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 04:55:35.100626   23149 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 04:55:35.100655   23149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 04:55:35.103837   23149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 04:55:35.108492   23149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 04:55:35.113735   23149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 04:55:35.121249   23149 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 04:55:35.122825   23149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:55:35.203020   23149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:55:35.207999   23149 certs.go:68] Setting up /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000 for IP: 10.0.2.15
	I0729 04:55:35.208005   23149 certs.go:194] generating shared ca certs ...
	I0729 04:55:35.208014   23149 certs.go:226] acquiring lock for ca certs: {Name:mkd0b73609ecd85c52105a2a4e4113a2c11cb5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:55:35.208153   23149 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.key
	I0729 04:55:35.208194   23149 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/proxy-client-ca.key
	I0729 04:55:35.208198   23149 certs.go:256] generating profile certs ...
	I0729 04:55:35.208256   23149 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/client.key
	I0729 04:55:35.208272   23149 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/apiserver.key.ba78e92e
	I0729 04:55:35.208282   23149 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/apiserver.crt.ba78e92e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 04:55:35.381011   23149 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/apiserver.crt.ba78e92e ...
	I0729 04:55:35.381024   23149 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/apiserver.crt.ba78e92e: {Name:mk933654b2edb1a3b9f82583e67afb52048e91e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:55:35.381328   23149 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/apiserver.key.ba78e92e ...
	I0729 04:55:35.381333   23149 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/apiserver.key.ba78e92e: {Name:mka987d38ae0edc6b0c248d6f6bfeefaf6369dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:55:35.381463   23149 certs.go:381] copying /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/apiserver.crt.ba78e92e -> /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/apiserver.crt
	I0729 04:55:35.381582   23149 certs.go:385] copying /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/apiserver.key.ba78e92e -> /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/apiserver.key
	I0729 04:55:35.381716   23149 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/proxy-client.key
	I0729 04:55:35.381841   23149 certs.go:484] found cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/21508.pem (1338 bytes)
	W0729 04:55:35.381863   23149 certs.go:480] ignoring /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/21508_empty.pem, impossibly tiny 0 bytes
	I0729 04:55:35.381869   23149 certs.go:484] found cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 04:55:35.381893   23149 certs.go:484] found cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem (1078 bytes)
	I0729 04:55:35.381915   23149 certs.go:484] found cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem (1123 bytes)
	I0729 04:55:35.381932   23149 certs.go:484] found cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/key.pem (1679 bytes)
	I0729 04:55:35.381971   23149 certs.go:484] found cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/files/etc/ssl/certs/215082.pem (1708 bytes)
	I0729 04:55:35.382330   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 04:55:35.390089   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 04:55:35.397362   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 04:55:35.404392   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 04:55:35.411793   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 04:55:35.419823   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 04:55:35.427257   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 04:55:35.435090   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 04:55:35.442737   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 04:55:35.450493   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/21508.pem --> /usr/share/ca-certificates/21508.pem (1338 bytes)
	I0729 04:55:35.457051   23149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/files/etc/ssl/certs/215082.pem --> /usr/share/ca-certificates/215082.pem (1708 bytes)
	I0729 04:55:35.464497   23149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 04:55:35.470315   23149 ssh_runner.go:195] Run: openssl version
	I0729 04:55:35.472564   23149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 04:55:35.477551   23149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:55:35.480011   23149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 11:54 /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:55:35.480054   23149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:55:35.482489   23149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 04:55:35.485895   23149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21508.pem && ln -fs /usr/share/ca-certificates/21508.pem /etc/ssl/certs/21508.pem"
	I0729 04:55:35.489137   23149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21508.pem
	I0729 04:55:35.490881   23149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:43 /usr/share/ca-certificates/21508.pem
	I0729 04:55:35.490907   23149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21508.pem
	I0729 04:55:35.492889   23149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21508.pem /etc/ssl/certs/51391683.0"
	I0729 04:55:35.496208   23149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215082.pem && ln -fs /usr/share/ca-certificates/215082.pem /etc/ssl/certs/215082.pem"
	I0729 04:55:35.499490   23149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215082.pem
	I0729 04:55:35.500950   23149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:43 /usr/share/ca-certificates/215082.pem
	I0729 04:55:35.500975   23149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215082.pem
	I0729 04:55:35.502924   23149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215082.pem /etc/ssl/certs/3ec20f2e.0"
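	Note: the `ln -fs ... /etc/ssl/certs/<hash>.0` commands above exist because OpenSSL locates CAs in a hashed directory by subject-hash filename — the hash being exactly what `openssl x509 -hash -noout` prints (e.g. b5213941 for minikubeCA.pem). A sketch that shells out for the hash and creates the link (linkBySubjectHash is a hypothetical helper):
	-- sketch --
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash reproduces the pattern above: create a
	// <subject-hash>.0 symlink so OpenSSL's hashed-dir lookup finds the cert.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // -f behaviour: replace an existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	-- /sketch --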
	I0729 04:55:35.505665   23149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 04:55:35.507088   23149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 04:55:35.508902   23149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 04:55:35.510761   23149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 04:55:35.513059   23149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 04:55:35.515503   23149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 04:55:35.517437   23149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
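	Note: each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The equivalent check in pure Go with crypto/x509 (expiresWithin is a hypothetical helper):
	-- sketch --
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin is the crypto/x509 equivalent of `openssl x509 -checkend N`.
	func expiresWithin(certPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
	-- /sketch --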
	I0729 04:55:35.519282   23149 kubeadm.go:392] StartCluster: {Name:running-upgrade-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54177 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:55:35.519351   23149 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:55:35.530866   23149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 04:55:35.534215   23149 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 04:55:35.534220   23149 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 04:55:35.534244   23149 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 04:55:35.537306   23149 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:55:35.537596   23149 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-965000" does not appear in /Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:55:35.537695   23149 kubeconfig.go:62] /Users/jenkins/minikube-integration/19338-21024/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-965000" cluster setting kubeconfig missing "running-upgrade-965000" context setting]
	I0729 04:55:35.537919   23149 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/kubeconfig: {Name:mkedcfdd12fb07fdee08d71279d618976d6521b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:55:35.538385   23149 kapi.go:59] client config for running-upgrade-965000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/client.key", CAFile:"/Users/jenkins/minikube-integration/19338-21024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105fe4080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:55:35.538740   23149 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 04:55:35.541336   23149 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-965000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
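The drift check above boils down to running diff -u and reading its exit status: 0 means the deployed kubeadm.yaml still matches the freshly rendered one, 1 means drift. A minimal Go sketch of the same check (hypothetical code, not minikube's actual implementation; it assumes diff is on PATH and both files are readable):

package main

import (
	"fmt"
	"os/exec"
)

// configDrift reports whether oldPath and newPath differ, using diff's
// exit status: 0 means identical, 1 means the files differ, anything
// else means diff itself failed.
func configDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drift, d, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
	} else if drift {
		fmt.Print("detected kubeadm config drift:\n", d)
	}
}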
	I0729 04:55:35.541341   23149 kubeadm.go:1160] stopping kube-system containers ...
	I0729 04:55:35.541378   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:55:35.555802   23149 docker.go:483] Stopping containers: [bda36b3734a0 55440b9075da 18414397917e 11be061cd908 78a1a0c6f760 2f48303c006e c884ccbfc64f 6bb2601080f4 750aa4789620 ec98ea480a95 55fee86287f6 00c6a00fbe3b f0fe4a5562b3 8b5be371b95d 0829f67caac3 7750d2d75b08 5a2c0fb71d26 3283cfe0de54 bfd571278aa0 deb9aa52f607 99cffa047cea dfaddab1831c 93ae02d798ee 5018074dfc84 f160dfa6aa00]
	I0729 04:55:35.555864   23149 ssh_runner.go:195] Run: docker stop bda36b3734a0 55440b9075da 18414397917e 11be061cd908 78a1a0c6f760 2f48303c006e c884ccbfc64f 6bb2601080f4 750aa4789620 ec98ea480a95 55fee86287f6 00c6a00fbe3b f0fe4a5562b3 8b5be371b95d 0829f67caac3 7750d2d75b08 5a2c0fb71d26 3283cfe0de54 bfd571278aa0 deb9aa52f607 99cffa047cea dfaddab1831c 93ae02d798ee 5018074dfc84 f160dfa6aa00
	I0729 04:55:35.568950   23149 ssh_runner.go:195] Run: sudo systemctl stop kubelet
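Before reconfiguring, the restart path lists every container whose kubelet-assigned name marks it as part of a kube-system pod, stops them all with a single docker stop, and then stops the kubelet itself, in that order, per the log above. A sketch of the sequence (hypothetical, not minikube's code; assumes a local Docker daemon and passwordless sudo):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Docker's name filter is a regex; kubelet names containers
	// k8s_<container>_<pod>_<namespace>_..., so this matches every
	// container (running or exited) belonging to a kube-system pod.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) > 0 {
		// One docker stop invocation for all IDs, as in the log above.
		exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
	}
	// Then stop the kubelet, matching the order shown in the log.
	exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
	fmt.Printf("stopped %d kube-system containers\n", len(ids))
}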
	I0729 04:55:35.660129   23149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:55:35.664479   23149 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 29 11:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 29 11:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 29 11:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 29 11:54 /etc/kubernetes/scheduler.conf
	
	I0729 04:55:35.664526   23149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/admin.conf
	I0729 04:55:35.668150   23149 kubeadm.go:163] "https://control-plane.minikube.internal:54177" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:55:35.668190   23149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:55:35.671165   23149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/kubelet.conf
	I0729 04:55:35.673943   23149 kubeadm.go:163] "https://control-plane.minikube.internal:54177" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:55:35.673970   23149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:55:35.677133   23149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/controller-manager.conf
	I0729 04:55:35.680446   23149 kubeadm.go:163] "https://control-plane.minikube.internal:54177" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:55:35.680495   23149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:55:35.683932   23149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/scheduler.conf
	I0729 04:55:35.687239   23149 kubeadm.go:163] "https://control-plane.minikube.internal:54177" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:55:35.687263   23149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
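Each grep/rm pair above is the same stale-kubeconfig test: if the expected control-plane endpoint no longer appears in a file under /etc/kubernetes, the file is deleted so the kubeconfig phase below can regenerate it. A compact sketch of that loop (hypothetical; the endpoint and file list are taken from the log above):

package main

import "os/exec"

func main() {
	endpoint := "https://control-plane.minikube.internal:54177"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the pattern is absent (or the file is
		// unreadable); either way the config is treated as stale.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}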
	I0729 04:55:35.690019   23149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:55:35.693155   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:55:35.735233   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:55:36.122383   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:55:36.400283   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:55:36.430429   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
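Rather than a full kubeadm init, the restart replays individual init phases in dependency order: certificates before kubeconfigs, kubelet-start before the static control-plane pods, etcd last. A sketch of the sequence (hypothetical; binary path and config file as in the log above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		// Same invocation shape as the log: a bash -c wrapper so the PATH
		// override reaches the kubeadm child process.
		cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase ` +
			p + ` --config /var/tmp/minikube/kubeadm.yaml`
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}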
	I0729 04:55:36.454326   23149 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:55:36.454403   23149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:55:36.956487   23149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:55:37.455738   23149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:55:37.956479   23149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:55:38.456426   23149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:55:38.461085   23149 api_server.go:72] duration metric: took 2.006806375s to wait for apiserver process to appear ...
	I0729 04:55:38.461095   23149 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:55:38.461104   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:55:43.463136   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:55:43.463165   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:55:48.463315   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:55:48.463347   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:55:53.463581   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:55:53.463606   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:55:58.463910   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:55:58.463950   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:03.464387   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:03.464429   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:08.464988   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:08.465017   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:13.465823   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:13.465875   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:18.467206   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:18.467231   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:23.468834   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:23.468934   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:28.471078   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:28.471154   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:33.473509   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:33.473552   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:38.475761   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
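Every probe above fails the same way: a five-second client timeout against https://10.0.2.15:8443/healthz, then a retry, so the apiserver never comes up at all rather than coming up slowly. A minimal Go sketch of such a poll (hypothetical; the real client pins the cluster CA via the CAFile shown earlier, replaced here by InsecureSkipVerify for brevity):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall time.Duration) error {
	// Each probe gets its own short timeout; the log shows ~5s attempts
	// ending in "context deadline exceeded".
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // brief pause between probes
	}
	return fmt.Errorf("apiserver never reported healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}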
	I0729 04:56:38.475946   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:56:38.498329   23149 logs.go:276] 2 containers: [dbc7495e69bc 2f48303c006e]
	I0729 04:56:38.498451   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:56:38.513530   23149 logs.go:276] 2 containers: [8ba66d0bf7e0 11be061cd908]
	I0729 04:56:38.513615   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:56:38.526816   23149 logs.go:276] 2 containers: [6a0dd7f75f7f ec98ea480a95]
	I0729 04:56:38.526893   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:56:38.537919   23149 logs.go:276] 2 containers: [6d9ecfc5a083 deb9aa52f607]
	I0729 04:56:38.537987   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:56:38.548566   23149 logs.go:276] 2 containers: [a7b978433727 55fee86287f6]
	I0729 04:56:38.548643   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:56:38.559371   23149 logs.go:276] 2 containers: [d72161c2b884 55440b9075da]
	I0729 04:56:38.559441   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:56:38.569365   23149 logs.go:276] 0 containers: []
	W0729 04:56:38.569376   23149 logs.go:278] No container was found matching "kindnet"
	I0729 04:56:38.569438   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:56:38.580459   23149 logs.go:276] 2 containers: [3b462dca7963 c884ccbfc64f]
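The docker ps queries above locate each control-plane component's containers by name filter; two IDs per component appear because both the pre-restart and post-restart instances still exist, while no kindnet container exists in this cluster. A sketch of the enumeration (hypothetical; component list taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	comps := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range comps {
		// -a includes exited containers, so old instances are listed too.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}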
	I0729 04:56:38.580476   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 04:56:38.580481   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:56:38.585475   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:56:38.585485   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:56:38.674627   23149 logs.go:123] Gathering logs for kube-apiserver [dbc7495e69bc] ...
	I0729 04:56:38.674637   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbc7495e69bc"
	I0729 04:56:38.693945   23149 logs.go:123] Gathering logs for kube-apiserver [2f48303c006e] ...
	I0729 04:56:38.693956   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f48303c006e"
	I0729 04:56:38.734532   23149 logs.go:123] Gathering logs for kube-controller-manager [55440b9075da] ...
	I0729 04:56:38.734548   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55440b9075da"
	I0729 04:56:38.750791   23149 logs.go:123] Gathering logs for container status ...
	I0729 04:56:38.750802   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:56:38.763456   23149 logs.go:123] Gathering logs for etcd [8ba66d0bf7e0] ...
	I0729 04:56:38.763467   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba66d0bf7e0"
	I0729 04:56:38.781939   23149 logs.go:123] Gathering logs for coredns [ec98ea480a95] ...
	I0729 04:56:38.781949   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec98ea480a95"
	I0729 04:56:38.794687   23149 logs.go:123] Gathering logs for kube-scheduler [deb9aa52f607] ...
	I0729 04:56:38.794700   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 deb9aa52f607"
	I0729 04:56:38.810760   23149 logs.go:123] Gathering logs for storage-provisioner [c884ccbfc64f] ...
	I0729 04:56:38.810772   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c884ccbfc64f"
	I0729 04:56:38.822522   23149 logs.go:123] Gathering logs for Docker ...
	I0729 04:56:38.822533   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:56:38.848476   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 04:56:38.848484   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 04:56:38.857868   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:56:38.857966   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:56:38.890325   23149 logs.go:123] Gathering logs for etcd [11be061cd908] ...
	I0729 04:56:38.890333   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11be061cd908"
	I0729 04:56:38.905375   23149 logs.go:123] Gathering logs for kube-scheduler [6d9ecfc5a083] ...
	I0729 04:56:38.905386   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d9ecfc5a083"
	I0729 04:56:38.916855   23149 logs.go:123] Gathering logs for storage-provisioner [3b462dca7963] ...
	I0729 04:56:38.916866   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b462dca7963"
	I0729 04:56:38.927955   23149 logs.go:123] Gathering logs for coredns [6a0dd7f75f7f] ...
	I0729 04:56:38.927964   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0dd7f75f7f"
	I0729 04:56:38.938805   23149 logs.go:123] Gathering logs for kube-proxy [a7b978433727] ...
	I0729 04:56:38.938814   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7b978433727"
	I0729 04:56:38.950262   23149 logs.go:123] Gathering logs for kube-proxy [55fee86287f6] ...
	I0729 04:56:38.950272   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55fee86287f6"
	I0729 04:56:38.962151   23149 logs.go:123] Gathering logs for kube-controller-manager [d72161c2b884] ...
	I0729 04:56:38.962160   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72161c2b884"
	I0729 04:56:38.978649   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:56:38.978658   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 04:56:38.978683   23149 out.go:239] X Problems detected in kubelet:
	W0729 04:56:38.978690   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:56:38.978693   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:56:38.978697   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:56:38.978700   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:56:48.982611   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:53.984793   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:53.984975   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:56:54.008754   23149 logs.go:276] 2 containers: [dbc7495e69bc 2f48303c006e]
	I0729 04:56:54.008866   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:56:54.026052   23149 logs.go:276] 2 containers: [8ba66d0bf7e0 11be061cd908]
	I0729 04:56:54.026146   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:56:54.039022   23149 logs.go:276] 2 containers: [6a0dd7f75f7f ec98ea480a95]
	I0729 04:56:54.039107   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:56:54.050213   23149 logs.go:276] 2 containers: [6d9ecfc5a083 deb9aa52f607]
	I0729 04:56:54.050285   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:56:54.060246   23149 logs.go:276] 2 containers: [a7b978433727 55fee86287f6]
	I0729 04:56:54.060317   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:56:54.070302   23149 logs.go:276] 2 containers: [d72161c2b884 55440b9075da]
	I0729 04:56:54.070366   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:56:54.080336   23149 logs.go:276] 0 containers: []
	W0729 04:56:54.080354   23149 logs.go:278] No container was found matching "kindnet"
	I0729 04:56:54.080405   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:56:54.090781   23149 logs.go:276] 2 containers: [3b462dca7963 c884ccbfc64f]
	I0729 04:56:54.090798   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 04:56:54.090805   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 04:56:54.100673   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:56:54.100768   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:56:54.133133   23149 logs.go:123] Gathering logs for kube-apiserver [2f48303c006e] ...
	I0729 04:56:54.133140   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f48303c006e"
	I0729 04:56:54.170298   23149 logs.go:123] Gathering logs for coredns [6a0dd7f75f7f] ...
	I0729 04:56:54.170311   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0dd7f75f7f"
	I0729 04:56:54.183007   23149 logs.go:123] Gathering logs for kube-proxy [55fee86287f6] ...
	I0729 04:56:54.183017   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55fee86287f6"
	I0729 04:56:54.194741   23149 logs.go:123] Gathering logs for storage-provisioner [c884ccbfc64f] ...
	I0729 04:56:54.194756   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c884ccbfc64f"
	I0729 04:56:54.206123   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:56:54.206134   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:56:54.248221   23149 logs.go:123] Gathering logs for kube-scheduler [deb9aa52f607] ...
	I0729 04:56:54.248232   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 deb9aa52f607"
	I0729 04:56:54.266898   23149 logs.go:123] Gathering logs for kube-apiserver [dbc7495e69bc] ...
	I0729 04:56:54.266908   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbc7495e69bc"
	I0729 04:56:54.288280   23149 logs.go:123] Gathering logs for kube-scheduler [6d9ecfc5a083] ...
	I0729 04:56:54.288290   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d9ecfc5a083"
	I0729 04:56:54.300494   23149 logs.go:123] Gathering logs for kube-proxy [a7b978433727] ...
	I0729 04:56:54.300504   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7b978433727"
	I0729 04:56:54.312827   23149 logs.go:123] Gathering logs for kube-controller-manager [d72161c2b884] ...
	I0729 04:56:54.312838   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72161c2b884"
	I0729 04:56:54.333442   23149 logs.go:123] Gathering logs for kube-controller-manager [55440b9075da] ...
	I0729 04:56:54.333456   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55440b9075da"
	I0729 04:56:54.349213   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 04:56:54.349225   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:56:54.354009   23149 logs.go:123] Gathering logs for etcd [8ba66d0bf7e0] ...
	I0729 04:56:54.354015   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba66d0bf7e0"
	I0729 04:56:54.370466   23149 logs.go:123] Gathering logs for etcd [11be061cd908] ...
	I0729 04:56:54.370477   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11be061cd908"
	I0729 04:56:54.384638   23149 logs.go:123] Gathering logs for coredns [ec98ea480a95] ...
	I0729 04:56:54.384647   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec98ea480a95"
	I0729 04:56:54.395992   23149 logs.go:123] Gathering logs for storage-provisioner [3b462dca7963] ...
	I0729 04:56:54.396006   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b462dca7963"
	I0729 04:56:54.407356   23149 logs.go:123] Gathering logs for Docker ...
	I0729 04:56:54.407366   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:56:54.434517   23149 logs.go:123] Gathering logs for container status ...
	I0729 04:56:54.434525   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:56:54.446997   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:56:54.447007   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 04:56:54.447038   23149 out.go:239] X Problems detected in kubelet:
	W0729 04:56:54.447042   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:56:54.447047   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:56:54.447052   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:56:54.447055   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:57:04.450950   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:57:09.453319   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:57:09.453562   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:57:09.484541   23149 logs.go:276] 2 containers: [dbc7495e69bc 2f48303c006e]
	I0729 04:57:09.484673   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:57:09.502521   23149 logs.go:276] 2 containers: [8ba66d0bf7e0 11be061cd908]
	I0729 04:57:09.502619   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:57:09.516426   23149 logs.go:276] 2 containers: [6a0dd7f75f7f ec98ea480a95]
	I0729 04:57:09.516508   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:57:09.528272   23149 logs.go:276] 2 containers: [6d9ecfc5a083 deb9aa52f607]
	I0729 04:57:09.528345   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:57:09.538904   23149 logs.go:276] 2 containers: [a7b978433727 55fee86287f6]
	I0729 04:57:09.538971   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:57:09.549082   23149 logs.go:276] 2 containers: [d72161c2b884 55440b9075da]
	I0729 04:57:09.549155   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:57:09.563463   23149 logs.go:276] 0 containers: []
	W0729 04:57:09.563477   23149 logs.go:278] No container was found matching "kindnet"
	I0729 04:57:09.563532   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:57:09.574344   23149 logs.go:276] 2 containers: [3b462dca7963 c884ccbfc64f]
	I0729 04:57:09.574358   23149 logs.go:123] Gathering logs for etcd [8ba66d0bf7e0] ...
	I0729 04:57:09.574366   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba66d0bf7e0"
	I0729 04:57:09.588375   23149 logs.go:123] Gathering logs for storage-provisioner [c884ccbfc64f] ...
	I0729 04:57:09.588387   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c884ccbfc64f"
	I0729 04:57:09.599439   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 04:57:09.599449   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 04:57:09.610907   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:57:09.610998   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:57:09.643025   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 04:57:09.643030   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:57:09.647797   23149 logs.go:123] Gathering logs for container status ...
	I0729 04:57:09.647804   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:57:09.659403   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:57:09.659413   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:57:09.698777   23149 logs.go:123] Gathering logs for etcd [11be061cd908] ...
	I0729 04:57:09.698788   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11be061cd908"
	I0729 04:57:09.713442   23149 logs.go:123] Gathering logs for kube-proxy [a7b978433727] ...
	I0729 04:57:09.713454   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7b978433727"
	I0729 04:57:09.726836   23149 logs.go:123] Gathering logs for kube-proxy [55fee86287f6] ...
	I0729 04:57:09.726847   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55fee86287f6"
	I0729 04:57:09.739612   23149 logs.go:123] Gathering logs for storage-provisioner [3b462dca7963] ...
	I0729 04:57:09.739625   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b462dca7963"
	I0729 04:57:09.751376   23149 logs.go:123] Gathering logs for Docker ...
	I0729 04:57:09.751387   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:57:09.777436   23149 logs.go:123] Gathering logs for kube-apiserver [2f48303c006e] ...
	I0729 04:57:09.777446   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f48303c006e"
	I0729 04:57:09.818651   23149 logs.go:123] Gathering logs for kube-scheduler [deb9aa52f607] ...
	I0729 04:57:09.818661   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 deb9aa52f607"
	I0729 04:57:09.834572   23149 logs.go:123] Gathering logs for coredns [ec98ea480a95] ...
	I0729 04:57:09.834582   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec98ea480a95"
	I0729 04:57:09.846193   23149 logs.go:123] Gathering logs for kube-scheduler [6d9ecfc5a083] ...
	I0729 04:57:09.846202   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d9ecfc5a083"
	I0729 04:57:09.858646   23149 logs.go:123] Gathering logs for kube-controller-manager [d72161c2b884] ...
	I0729 04:57:09.858657   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72161c2b884"
	I0729 04:57:09.879785   23149 logs.go:123] Gathering logs for kube-controller-manager [55440b9075da] ...
	I0729 04:57:09.879796   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55440b9075da"
	I0729 04:57:09.895418   23149 logs.go:123] Gathering logs for kube-apiserver [dbc7495e69bc] ...
	I0729 04:57:09.895428   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbc7495e69bc"
	I0729 04:57:09.915787   23149 logs.go:123] Gathering logs for coredns [6a0dd7f75f7f] ...
	I0729 04:57:09.915799   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0dd7f75f7f"
	I0729 04:57:09.938021   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:57:09.938033   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 04:57:09.938062   23149 out.go:239] X Problems detected in kubelet:
	W0729 04:57:09.938066   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:57:09.938070   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:57:09.938075   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:57:09.938080   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:57:19.941990   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:57:24.944427   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:57:24.944561   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:57:24.971775   23149 logs.go:276] 2 containers: [dbc7495e69bc 2f48303c006e]
	I0729 04:57:24.971847   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:57:24.983613   23149 logs.go:276] 2 containers: [8ba66d0bf7e0 11be061cd908]
	I0729 04:57:24.983687   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:57:24.994980   23149 logs.go:276] 2 containers: [6a0dd7f75f7f ec98ea480a95]
	I0729 04:57:24.995048   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:57:25.005859   23149 logs.go:276] 2 containers: [6d9ecfc5a083 deb9aa52f607]
	I0729 04:57:25.005929   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:57:25.018534   23149 logs.go:276] 2 containers: [a7b978433727 55fee86287f6]
	I0729 04:57:25.018612   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:57:25.029816   23149 logs.go:276] 2 containers: [d72161c2b884 55440b9075da]
	I0729 04:57:25.029904   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:57:25.041029   23149 logs.go:276] 0 containers: []
	W0729 04:57:25.041041   23149 logs.go:278] No container was found matching "kindnet"
	I0729 04:57:25.041102   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:57:25.056022   23149 logs.go:276] 2 containers: [3b462dca7963 c884ccbfc64f]
	I0729 04:57:25.056040   23149 logs.go:123] Gathering logs for kube-scheduler [6d9ecfc5a083] ...
	I0729 04:57:25.056046   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d9ecfc5a083"
	I0729 04:57:25.068350   23149 logs.go:123] Gathering logs for kube-controller-manager [55440b9075da] ...
	I0729 04:57:25.068362   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55440b9075da"
	I0729 04:57:25.084143   23149 logs.go:123] Gathering logs for storage-provisioner [c884ccbfc64f] ...
	I0729 04:57:25.084156   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c884ccbfc64f"
	I0729 04:57:25.095148   23149 logs.go:123] Gathering logs for Docker ...
	I0729 04:57:25.095161   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:57:25.120105   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 04:57:25.120113   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:57:25.124447   23149 logs.go:123] Gathering logs for etcd [8ba66d0bf7e0] ...
	I0729 04:57:25.124453   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba66d0bf7e0"
	I0729 04:57:25.138546   23149 logs.go:123] Gathering logs for storage-provisioner [3b462dca7963] ...
	I0729 04:57:25.138556   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b462dca7963"
	I0729 04:57:25.149594   23149 logs.go:123] Gathering logs for kube-apiserver [2f48303c006e] ...
	I0729 04:57:25.149607   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f48303c006e"
	I0729 04:57:25.186004   23149 logs.go:123] Gathering logs for coredns [6a0dd7f75f7f] ...
	I0729 04:57:25.186016   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0dd7f75f7f"
	I0729 04:57:25.197319   23149 logs.go:123] Gathering logs for kube-scheduler [deb9aa52f607] ...
	I0729 04:57:25.197329   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 deb9aa52f607"
	I0729 04:57:25.217023   23149 logs.go:123] Gathering logs for kube-proxy [55fee86287f6] ...
	I0729 04:57:25.217036   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55fee86287f6"
	I0729 04:57:25.229591   23149 logs.go:123] Gathering logs for container status ...
	I0729 04:57:25.229603   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:57:25.241399   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 04:57:25.241411   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 04:57:25.250082   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:57:25.250174   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:57:25.283161   23149 logs.go:123] Gathering logs for kube-apiserver [dbc7495e69bc] ...
	I0729 04:57:25.283166   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbc7495e69bc"
	I0729 04:57:25.297109   23149 logs.go:123] Gathering logs for etcd [11be061cd908] ...
	I0729 04:57:25.297120   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11be061cd908"
	I0729 04:57:25.311200   23149 logs.go:123] Gathering logs for coredns [ec98ea480a95] ...
	I0729 04:57:25.311212   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec98ea480a95"
	I0729 04:57:25.322086   23149 logs.go:123] Gathering logs for kube-proxy [a7b978433727] ...
	I0729 04:57:25.322101   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7b978433727"
	I0729 04:57:25.334184   23149 logs.go:123] Gathering logs for kube-controller-manager [d72161c2b884] ...
	I0729 04:57:25.334195   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72161c2b884"
	I0729 04:57:25.350985   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:57:25.350996   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:57:25.391181   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:57:25.391193   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 04:57:25.391219   23149 out.go:239] X Problems detected in kubelet:
	W0729 04:57:25.391222   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:57:25.391227   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:57:25.391231   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:57:25.391234   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:57:35.405098   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:57:40.410404   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:57:40.410522   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:57:40.421280   23149 logs.go:276] 2 containers: [dbc7495e69bc 2f48303c006e]
	I0729 04:57:40.421357   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:57:40.431800   23149 logs.go:276] 2 containers: [8ba66d0bf7e0 11be061cd908]
	I0729 04:57:40.431873   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:57:40.442811   23149 logs.go:276] 2 containers: [6a0dd7f75f7f ec98ea480a95]
	I0729 04:57:40.442892   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:57:40.454651   23149 logs.go:276] 2 containers: [6d9ecfc5a083 deb9aa52f607]
	I0729 04:57:40.454715   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:57:40.465603   23149 logs.go:276] 2 containers: [a7b978433727 55fee86287f6]
	I0729 04:57:40.465667   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:57:40.479823   23149 logs.go:276] 2 containers: [d72161c2b884 55440b9075da]
	I0729 04:57:40.479890   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:57:40.490538   23149 logs.go:276] 0 containers: []
	W0729 04:57:40.490550   23149 logs.go:278] No container was found matching "kindnet"
	I0729 04:57:40.490604   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:57:40.501238   23149 logs.go:276] 2 containers: [3b462dca7963 c884ccbfc64f]
	I0729 04:57:40.501252   23149 logs.go:123] Gathering logs for kube-scheduler [deb9aa52f607] ...
	I0729 04:57:40.501259   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 deb9aa52f607"
	I0729 04:57:40.516351   23149 logs.go:123] Gathering logs for kube-proxy [55fee86287f6] ...
	I0729 04:57:40.516364   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55fee86287f6"
	I0729 04:57:40.527697   23149 logs.go:123] Gathering logs for kube-controller-manager [d72161c2b884] ...
	I0729 04:57:40.527707   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72161c2b884"
	I0729 04:57:40.546492   23149 logs.go:123] Gathering logs for Docker ...
	I0729 04:57:40.546501   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:57:40.570115   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 04:57:40.570122   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 04:57:40.580763   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:57:40.580855   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:57:40.613099   23149 logs.go:123] Gathering logs for kube-apiserver [dbc7495e69bc] ...
	I0729 04:57:40.613105   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbc7495e69bc"
	I0729 04:57:40.626732   23149 logs.go:123] Gathering logs for kube-apiserver [2f48303c006e] ...
	I0729 04:57:40.626742   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f48303c006e"
	I0729 04:57:40.665882   23149 logs.go:123] Gathering logs for container status ...
	I0729 04:57:40.665893   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:57:40.677782   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 04:57:40.677797   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:57:40.681957   23149 logs.go:123] Gathering logs for coredns [ec98ea480a95] ...
	I0729 04:57:40.681963   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec98ea480a95"
	I0729 04:57:40.694986   23149 logs.go:123] Gathering logs for storage-provisioner [3b462dca7963] ...
	I0729 04:57:40.695000   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b462dca7963"
	I0729 04:57:40.708874   23149 logs.go:123] Gathering logs for kube-proxy [a7b978433727] ...
	I0729 04:57:40.708884   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7b978433727"
	I0729 04:57:40.720896   23149 logs.go:123] Gathering logs for storage-provisioner [c884ccbfc64f] ...
	I0729 04:57:40.720910   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c884ccbfc64f"
	I0729 04:57:40.732042   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:57:40.732053   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:57:40.769538   23149 logs.go:123] Gathering logs for etcd [11be061cd908] ...
	I0729 04:57:40.769550   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11be061cd908"
	I0729 04:57:40.793081   23149 logs.go:123] Gathering logs for kube-scheduler [6d9ecfc5a083] ...
	I0729 04:57:40.793094   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d9ecfc5a083"
	I0729 04:57:40.804757   23149 logs.go:123] Gathering logs for etcd [8ba66d0bf7e0] ...
	I0729 04:57:40.804772   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba66d0bf7e0"
	I0729 04:57:40.818522   23149 logs.go:123] Gathering logs for coredns [6a0dd7f75f7f] ...
	I0729 04:57:40.818531   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0dd7f75f7f"
	I0729 04:57:40.830022   23149 logs.go:123] Gathering logs for kube-controller-manager [55440b9075da] ...
	I0729 04:57:40.830034   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55440b9075da"
	I0729 04:57:40.846371   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:57:40.846386   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 04:57:40.846417   23149 out.go:239] X Problems detected in kubelet:
	W0729 04:57:40.846422   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:57:40.846425   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:57:40.846459   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:57:40.846466   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:57:50.854077   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:57:55.857434   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:57:55.857670   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:57:55.876644   23149 logs.go:276] 2 containers: [dbc7495e69bc 2f48303c006e]
	I0729 04:57:55.876740   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:57:55.890170   23149 logs.go:276] 2 containers: [8ba66d0bf7e0 11be061cd908]
	I0729 04:57:55.890248   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:57:55.901999   23149 logs.go:276] 2 containers: [6a0dd7f75f7f ec98ea480a95]
	I0729 04:57:55.902077   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:57:55.912772   23149 logs.go:276] 2 containers: [6d9ecfc5a083 deb9aa52f607]
	I0729 04:57:55.912835   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:57:55.923137   23149 logs.go:276] 2 containers: [a7b978433727 55fee86287f6]
	I0729 04:57:55.923195   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:57:55.933713   23149 logs.go:276] 2 containers: [d72161c2b884 55440b9075da]
	I0729 04:57:55.933810   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:57:55.944039   23149 logs.go:276] 0 containers: []
	W0729 04:57:55.944052   23149 logs.go:278] No container was found matching "kindnet"
	I0729 04:57:55.944113   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:57:55.958786   23149 logs.go:276] 2 containers: [3b462dca7963 c884ccbfc64f]
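Each "2 containers: [...]" line is the result of the docker ps filter on the line above it: one query per control-plane component, which here returns two IDs per component, consistent with a restarted node keeping the exited container alongside the new one. A rough Go equivalent of the lookup (the component list is copied from the log; a local docker binary on PATH is assumed, whereas minikube runs the command inside the guest over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // matches k8s_<component>, mirroring the filter in the log above.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"storage-provisioner",
    	}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }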
	I0729 04:57:55.958808   23149 logs.go:123] Gathering logs for storage-provisioner [3b462dca7963] ...
	I0729 04:57:55.958813   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b462dca7963"
	I0729 04:57:55.971834   23149 logs.go:123] Gathering logs for container status ...
	I0729 04:57:55.971845   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:57:55.984203   23149 logs.go:123] Gathering logs for kube-apiserver [2f48303c006e] ...
	I0729 04:57:55.984215   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f48303c006e"
	I0729 04:57:56.022603   23149 logs.go:123] Gathering logs for etcd [11be061cd908] ...
	I0729 04:57:56.022616   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11be061cd908"
	I0729 04:57:56.037082   23149 logs.go:123] Gathering logs for coredns [6a0dd7f75f7f] ...
	I0729 04:57:56.037093   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0dd7f75f7f"
	I0729 04:57:56.048475   23149 logs.go:123] Gathering logs for coredns [ec98ea480a95] ...
	I0729 04:57:56.048485   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec98ea480a95"
	I0729 04:57:56.059855   23149 logs.go:123] Gathering logs for kube-controller-manager [d72161c2b884] ...
	I0729 04:57:56.059865   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72161c2b884"
	I0729 04:57:56.080248   23149 logs.go:123] Gathering logs for kube-controller-manager [55440b9075da] ...
	I0729 04:57:56.080257   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55440b9075da"
	I0729 04:57:56.095428   23149 logs.go:123] Gathering logs for Docker ...
	I0729 04:57:56.095438   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:57:56.119188   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 04:57:56.119196   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 04:57:56.127296   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:57:56.127386   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
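The two problems flagged here are one RBAC failure seen from both the reflector's list and watch paths: the kubelet's node identity (system:node:running-upgrade-965000) is denied a list on the coredns ConfigMap, and "no relationship found between node ... and this object" is the node authorizer's message when it cannot tie the requested object to any pod bound to that node. The scan itself is a filter over the last 400 kubelet journal lines; a rough sketch of such a scan follows (the real matcher in minikube's logs.go is broader than the illustrative "forbidden" pattern used here):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os/exec"
    	"regexp"
    )

    // problem flags kubelet warning/error lines that report an RBAC denial,
    // like the reflector failures quoted above.
    var problem = regexp.MustCompile(`kubelet\[\d+\]: [WE]\d{4} .*forbidden`)

    func main() {
    	cmd := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400")
    	out, err := cmd.StdoutPipe()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if err := cmd.Start(); err != nil {
    		fmt.Println(err)
    		return
    	}
    	sc := bufio.NewScanner(out)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
    	for sc.Scan() {
    		if line := sc.Text(); problem.MatchString(line) {
    			fmt.Println("Found kubelet problem:", line)
    		}
    	}
    	_ = cmd.Wait()
    }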
	I0729 04:57:56.159698   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 04:57:56.159705   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:57:56.163898   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:57:56.163907   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:57:56.200815   23149 logs.go:123] Gathering logs for kube-apiserver [dbc7495e69bc] ...
	I0729 04:57:56.200828   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbc7495e69bc"
	I0729 04:57:56.214949   23149 logs.go:123] Gathering logs for kube-scheduler [6d9ecfc5a083] ...
	I0729 04:57:56.214960   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d9ecfc5a083"
	I0729 04:57:56.226504   23149 logs.go:123] Gathering logs for storage-provisioner [c884ccbfc64f] ...
	I0729 04:57:56.226516   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c884ccbfc64f"
	I0729 04:57:56.238354   23149 logs.go:123] Gathering logs for etcd [8ba66d0bf7e0] ...
	I0729 04:57:56.238365   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba66d0bf7e0"
	I0729 04:57:56.253010   23149 logs.go:123] Gathering logs for kube-scheduler [deb9aa52f607] ...
	I0729 04:57:56.253020   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 deb9aa52f607"
	I0729 04:57:56.270542   23149 logs.go:123] Gathering logs for kube-proxy [a7b978433727] ...
	I0729 04:57:56.270555   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7b978433727"
	I0729 04:57:56.282230   23149 logs.go:123] Gathering logs for kube-proxy [55fee86287f6] ...
	I0729 04:57:56.282244   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55fee86287f6"
	I0729 04:57:56.296074   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:57:56.296084   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 04:57:56.296112   23149 out.go:239] X Problems detected in kubelet:
	W0729 04:57:56.296116   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:57:56.296119   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:57:56.296125   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:57:56.296129   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:58:06.301510   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:58:11.304602   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:58:11.304952   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:58:11.333897   23149 logs.go:276] 2 containers: [dbc7495e69bc 2f48303c006e]
	I0729 04:58:11.334025   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:58:11.356858   23149 logs.go:276] 2 containers: [8ba66d0bf7e0 11be061cd908]
	I0729 04:58:11.356941   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:58:11.374831   23149 logs.go:276] 2 containers: [6a0dd7f75f7f ec98ea480a95]
	I0729 04:58:11.374904   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:58:11.387986   23149 logs.go:276] 2 containers: [6d9ecfc5a083 deb9aa52f607]
	I0729 04:58:11.388055   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:58:11.398941   23149 logs.go:276] 2 containers: [a7b978433727 55fee86287f6]
	I0729 04:58:11.399010   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:58:11.410052   23149 logs.go:276] 2 containers: [d72161c2b884 55440b9075da]
	I0729 04:58:11.410123   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:58:11.425617   23149 logs.go:276] 0 containers: []
	W0729 04:58:11.425633   23149 logs.go:278] No container was found matching "kindnet"
	I0729 04:58:11.425694   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:58:11.436061   23149 logs.go:276] 2 containers: [3b462dca7963 c884ccbfc64f]
	I0729 04:58:11.436079   23149 logs.go:123] Gathering logs for kube-apiserver [2f48303c006e] ...
	I0729 04:58:11.436084   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f48303c006e"
	I0729 04:58:11.473424   23149 logs.go:123] Gathering logs for etcd [8ba66d0bf7e0] ...
	I0729 04:58:11.473434   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba66d0bf7e0"
	I0729 04:58:11.488201   23149 logs.go:123] Gathering logs for kube-scheduler [deb9aa52f607] ...
	I0729 04:58:11.488211   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 deb9aa52f607"
	I0729 04:58:11.503445   23149 logs.go:123] Gathering logs for storage-provisioner [3b462dca7963] ...
	I0729 04:58:11.503455   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b462dca7963"
	I0729 04:58:11.515090   23149 logs.go:123] Gathering logs for storage-provisioner [c884ccbfc64f] ...
	I0729 04:58:11.515100   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c884ccbfc64f"
	I0729 04:58:11.526387   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 04:58:11.526397   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 04:58:11.535394   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:58:11.535486   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:58:11.567340   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:58:11.567347   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:58:11.602527   23149 logs.go:123] Gathering logs for etcd [11be061cd908] ...
	I0729 04:58:11.602538   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11be061cd908"
	I0729 04:58:11.617902   23149 logs.go:123] Gathering logs for coredns [6a0dd7f75f7f] ...
	I0729 04:58:11.617913   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0dd7f75f7f"
	I0729 04:58:11.629135   23149 logs.go:123] Gathering logs for kube-controller-manager [d72161c2b884] ...
	I0729 04:58:11.629146   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72161c2b884"
	I0729 04:58:11.647417   23149 logs.go:123] Gathering logs for Docker ...
	I0729 04:58:11.647427   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:58:11.672205   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 04:58:11.672213   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:58:11.676403   23149 logs.go:123] Gathering logs for kube-scheduler [6d9ecfc5a083] ...
	I0729 04:58:11.676412   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d9ecfc5a083"
	I0729 04:58:11.688129   23149 logs.go:123] Gathering logs for kube-proxy [a7b978433727] ...
	I0729 04:58:11.688140   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7b978433727"
	I0729 04:58:11.701356   23149 logs.go:123] Gathering logs for kube-proxy [55fee86287f6] ...
	I0729 04:58:11.701365   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55fee86287f6"
	I0729 04:58:11.713552   23149 logs.go:123] Gathering logs for kube-apiserver [dbc7495e69bc] ...
	I0729 04:58:11.713564   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbc7495e69bc"
	I0729 04:58:11.727895   23149 logs.go:123] Gathering logs for coredns [ec98ea480a95] ...
	I0729 04:58:11.727904   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec98ea480a95"
	I0729 04:58:11.740752   23149 logs.go:123] Gathering logs for kube-controller-manager [55440b9075da] ...
	I0729 04:58:11.740765   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55440b9075da"
	I0729 04:58:11.755941   23149 logs.go:123] Gathering logs for container status ...
	I0729 04:58:11.755952   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:58:11.767572   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:58:11.767582   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 04:58:11.767607   23149 out.go:239] X Problems detected in kubelet:
	W0729 04:58:11.767612   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:58:11.767615   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:58:11.767620   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:58:11.767623   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:58:21.772189   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:58:26.775007   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:58:26.775238   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:58:26.794802   23149 logs.go:276] 2 containers: [dbc7495e69bc 2f48303c006e]
	I0729 04:58:26.794896   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:58:26.809639   23149 logs.go:276] 2 containers: [8ba66d0bf7e0 11be061cd908]
	I0729 04:58:26.809710   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:58:26.821698   23149 logs.go:276] 2 containers: [6a0dd7f75f7f ec98ea480a95]
	I0729 04:58:26.821770   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:58:26.832392   23149 logs.go:276] 2 containers: [6d9ecfc5a083 deb9aa52f607]
	I0729 04:58:26.832467   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:58:26.843030   23149 logs.go:276] 2 containers: [a7b978433727 55fee86287f6]
	I0729 04:58:26.843095   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:58:26.853496   23149 logs.go:276] 2 containers: [d72161c2b884 55440b9075da]
	I0729 04:58:26.853565   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:58:26.864600   23149 logs.go:276] 0 containers: []
	W0729 04:58:26.864612   23149 logs.go:278] No container was found matching "kindnet"
	I0729 04:58:26.864665   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:58:26.874883   23149 logs.go:276] 2 containers: [3b462dca7963 c884ccbfc64f]
	I0729 04:58:26.874897   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:58:26.874902   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:58:26.911856   23149 logs.go:123] Gathering logs for etcd [11be061cd908] ...
	I0729 04:58:26.911868   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11be061cd908"
	I0729 04:58:26.927123   23149 logs.go:123] Gathering logs for kube-scheduler [6d9ecfc5a083] ...
	I0729 04:58:26.927135   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d9ecfc5a083"
	I0729 04:58:26.939069   23149 logs.go:123] Gathering logs for kube-controller-manager [d72161c2b884] ...
	I0729 04:58:26.939080   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72161c2b884"
	I0729 04:58:26.956718   23149 logs.go:123] Gathering logs for storage-provisioner [c884ccbfc64f] ...
	I0729 04:58:26.956728   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c884ccbfc64f"
	I0729 04:58:26.967807   23149 logs.go:123] Gathering logs for kube-apiserver [2f48303c006e] ...
	I0729 04:58:26.967817   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f48303c006e"
	I0729 04:58:27.007052   23149 logs.go:123] Gathering logs for coredns [6a0dd7f75f7f] ...
	I0729 04:58:27.007065   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0dd7f75f7f"
	I0729 04:58:27.019216   23149 logs.go:123] Gathering logs for kube-proxy [a7b978433727] ...
	I0729 04:58:27.019226   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7b978433727"
	I0729 04:58:27.031082   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 04:58:27.031096   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 04:58:27.039959   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:58:27.040050   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:58:27.072126   23149 logs.go:123] Gathering logs for kube-apiserver [dbc7495e69bc] ...
	I0729 04:58:27.072133   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbc7495e69bc"
	I0729 04:58:27.089364   23149 logs.go:123] Gathering logs for etcd [8ba66d0bf7e0] ...
	I0729 04:58:27.089383   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba66d0bf7e0"
	I0729 04:58:27.102767   23149 logs.go:123] Gathering logs for coredns [ec98ea480a95] ...
	I0729 04:58:27.102776   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec98ea480a95"
	I0729 04:58:27.113870   23149 logs.go:123] Gathering logs for kube-scheduler [deb9aa52f607] ...
	I0729 04:58:27.113880   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 deb9aa52f607"
	I0729 04:58:27.128867   23149 logs.go:123] Gathering logs for kube-proxy [55fee86287f6] ...
	I0729 04:58:27.128881   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55fee86287f6"
	I0729 04:58:27.140422   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 04:58:27.140432   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:58:27.144900   23149 logs.go:123] Gathering logs for kube-controller-manager [55440b9075da] ...
	I0729 04:58:27.144906   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55440b9075da"
	I0729 04:58:27.164761   23149 logs.go:123] Gathering logs for storage-provisioner [3b462dca7963] ...
	I0729 04:58:27.164771   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b462dca7963"
	I0729 04:58:27.175667   23149 logs.go:123] Gathering logs for Docker ...
	I0729 04:58:27.175678   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:58:27.199966   23149 logs.go:123] Gathering logs for container status ...
	I0729 04:58:27.199974   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:58:27.213252   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:58:27.213263   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 04:58:27.213295   23149 out.go:239] X Problems detected in kubelet:
	W0729 04:58:27.213299   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:58:27.213304   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:58:27.213307   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:58:27.213376   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:58:37.217602   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:58:42.219977   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:58:42.220163   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:58:42.235769   23149 logs.go:276] 2 containers: [dbc7495e69bc 2f48303c006e]
	I0729 04:58:42.235857   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:58:42.249505   23149 logs.go:276] 2 containers: [8ba66d0bf7e0 11be061cd908]
	I0729 04:58:42.249574   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:58:42.260413   23149 logs.go:276] 2 containers: [6a0dd7f75f7f ec98ea480a95]
	I0729 04:58:42.260474   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:58:42.276336   23149 logs.go:276] 2 containers: [6d9ecfc5a083 deb9aa52f607]
	I0729 04:58:42.276415   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:58:42.287377   23149 logs.go:276] 2 containers: [a7b978433727 55fee86287f6]
	I0729 04:58:42.287440   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:58:42.299148   23149 logs.go:276] 2 containers: [d72161c2b884 55440b9075da]
	I0729 04:58:42.299225   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:58:42.309101   23149 logs.go:276] 0 containers: []
	W0729 04:58:42.309117   23149 logs.go:278] No container was found matching "kindnet"
	I0729 04:58:42.309182   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:58:42.324109   23149 logs.go:276] 2 containers: [3b462dca7963 c884ccbfc64f]
	I0729 04:58:42.324124   23149 logs.go:123] Gathering logs for kube-scheduler [deb9aa52f607] ...
	I0729 04:58:42.324129   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 deb9aa52f607"
	I0729 04:58:42.345586   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 04:58:42.345597   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:58:42.350487   23149 logs.go:123] Gathering logs for coredns [6a0dd7f75f7f] ...
	I0729 04:58:42.350494   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0dd7f75f7f"
	I0729 04:58:42.361852   23149 logs.go:123] Gathering logs for storage-provisioner [3b462dca7963] ...
	I0729 04:58:42.361864   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b462dca7963"
	I0729 04:58:42.374045   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:58:42.374055   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:58:42.414953   23149 logs.go:123] Gathering logs for kube-apiserver [dbc7495e69bc] ...
	I0729 04:58:42.414964   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbc7495e69bc"
	I0729 04:58:42.428440   23149 logs.go:123] Gathering logs for coredns [ec98ea480a95] ...
	I0729 04:58:42.428451   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec98ea480a95"
	I0729 04:58:42.440090   23149 logs.go:123] Gathering logs for kube-scheduler [6d9ecfc5a083] ...
	I0729 04:58:42.440102   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d9ecfc5a083"
	I0729 04:58:42.452137   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 04:58:42.452147   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 04:58:42.461654   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:58:42.461751   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:58:42.494211   23149 logs.go:123] Gathering logs for kube-apiserver [2f48303c006e] ...
	I0729 04:58:42.494217   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f48303c006e"
	I0729 04:58:42.533053   23149 logs.go:123] Gathering logs for etcd [8ba66d0bf7e0] ...
	I0729 04:58:42.533066   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba66d0bf7e0"
	I0729 04:58:42.548094   23149 logs.go:123] Gathering logs for etcd [11be061cd908] ...
	I0729 04:58:42.548107   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11be061cd908"
	I0729 04:58:42.562910   23149 logs.go:123] Gathering logs for kube-proxy [a7b978433727] ...
	I0729 04:58:42.562921   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7b978433727"
	I0729 04:58:42.574530   23149 logs.go:123] Gathering logs for kube-proxy [55fee86287f6] ...
	I0729 04:58:42.574543   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55fee86287f6"
	I0729 04:58:42.591962   23149 logs.go:123] Gathering logs for kube-controller-manager [d72161c2b884] ...
	I0729 04:58:42.591974   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72161c2b884"
	I0729 04:58:42.614929   23149 logs.go:123] Gathering logs for kube-controller-manager [55440b9075da] ...
	I0729 04:58:42.614946   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55440b9075da"
	I0729 04:58:42.631331   23149 logs.go:123] Gathering logs for storage-provisioner [c884ccbfc64f] ...
	I0729 04:58:42.631340   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c884ccbfc64f"
	I0729 04:58:42.643155   23149 logs.go:123] Gathering logs for Docker ...
	I0729 04:58:42.643165   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:58:42.665927   23149 logs.go:123] Gathering logs for container status ...
	I0729 04:58:42.665934   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:58:42.677789   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:58:42.677799   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 04:58:42.677826   23149 out.go:239] X Problems detected in kubelet:
	W0729 04:58:42.677830   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:58:42.677834   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:58:42.677837   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:58:42.677840   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:58:52.681917   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:58:57.684287   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:58:57.684575   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:58:57.715021   23149 logs.go:276] 2 containers: [dbc7495e69bc 2f48303c006e]
	I0729 04:58:57.715152   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:58:57.740912   23149 logs.go:276] 2 containers: [8ba66d0bf7e0 11be061cd908]
	I0729 04:58:57.740989   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:58:57.753614   23149 logs.go:276] 2 containers: [6a0dd7f75f7f ec98ea480a95]
	I0729 04:58:57.753683   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:58:57.764442   23149 logs.go:276] 2 containers: [6d9ecfc5a083 deb9aa52f607]
	I0729 04:58:57.764506   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:58:57.775777   23149 logs.go:276] 2 containers: [a7b978433727 55fee86287f6]
	I0729 04:58:57.775856   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:58:57.786293   23149 logs.go:276] 2 containers: [d72161c2b884 55440b9075da]
	I0729 04:58:57.786369   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:58:57.797262   23149 logs.go:276] 0 containers: []
	W0729 04:58:57.797275   23149 logs.go:278] No container was found matching "kindnet"
	I0729 04:58:57.797334   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:58:57.808267   23149 logs.go:276] 2 containers: [3b462dca7963 c884ccbfc64f]
	I0729 04:58:57.808282   23149 logs.go:123] Gathering logs for etcd [8ba66d0bf7e0] ...
	I0729 04:58:57.808288   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba66d0bf7e0"
	I0729 04:58:57.822005   23149 logs.go:123] Gathering logs for kube-proxy [55fee86287f6] ...
	I0729 04:58:57.822018   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55fee86287f6"
	I0729 04:58:57.834133   23149 logs.go:123] Gathering logs for coredns [ec98ea480a95] ...
	I0729 04:58:57.834145   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec98ea480a95"
	I0729 04:58:57.846429   23149 logs.go:123] Gathering logs for kube-controller-manager [55440b9075da] ...
	I0729 04:58:57.846439   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55440b9075da"
	I0729 04:58:57.863268   23149 logs.go:123] Gathering logs for storage-provisioner [3b462dca7963] ...
	I0729 04:58:57.863282   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b462dca7963"
	I0729 04:58:57.874686   23149 logs.go:123] Gathering logs for storage-provisioner [c884ccbfc64f] ...
	I0729 04:58:57.874696   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c884ccbfc64f"
	I0729 04:58:57.886273   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 04:58:57.886283   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 04:58:57.897771   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:58:57.897864   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:58:57.929882   23149 logs.go:123] Gathering logs for kube-apiserver [2f48303c006e] ...
	I0729 04:58:57.929888   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f48303c006e"
	I0729 04:58:57.968952   23149 logs.go:123] Gathering logs for kube-proxy [a7b978433727] ...
	I0729 04:58:57.968962   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7b978433727"
	I0729 04:58:57.980574   23149 logs.go:123] Gathering logs for container status ...
	I0729 04:58:57.980584   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:58:57.992856   23149 logs.go:123] Gathering logs for etcd [11be061cd908] ...
	I0729 04:58:57.992872   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11be061cd908"
	I0729 04:58:58.007579   23149 logs.go:123] Gathering logs for coredns [6a0dd7f75f7f] ...
	I0729 04:58:58.007589   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0dd7f75f7f"
	I0729 04:58:58.019193   23149 logs.go:123] Gathering logs for kube-apiserver [dbc7495e69bc] ...
	I0729 04:58:58.019204   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbc7495e69bc"
	I0729 04:58:58.033440   23149 logs.go:123] Gathering logs for kube-scheduler [6d9ecfc5a083] ...
	I0729 04:58:58.033451   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d9ecfc5a083"
	I0729 04:58:58.044810   23149 logs.go:123] Gathering logs for kube-scheduler [deb9aa52f607] ...
	I0729 04:58:58.044820   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 deb9aa52f607"
	I0729 04:58:58.060124   23149 logs.go:123] Gathering logs for kube-controller-manager [d72161c2b884] ...
	I0729 04:58:58.060135   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72161c2b884"
	I0729 04:58:58.077328   23149 logs.go:123] Gathering logs for Docker ...
	I0729 04:58:58.077340   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:58:58.101271   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 04:58:58.101278   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:58:58.105615   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:58:58.105624   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:58:58.143280   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:58:58.143291   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 04:58:58.143320   23149 out.go:239] X Problems detected in kubelet:
	W0729 04:58:58.143325   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:58:58.143329   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:58:58.143333   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:58:58.143338   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:59:08.147458   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:59:13.149848   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:59:13.150021   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:59:13.172726   23149 logs.go:276] 2 containers: [dbc7495e69bc 2f48303c006e]
	I0729 04:59:13.172822   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:59:13.186923   23149 logs.go:276] 2 containers: [8ba66d0bf7e0 11be061cd908]
	I0729 04:59:13.186993   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:59:13.198619   23149 logs.go:276] 2 containers: [6a0dd7f75f7f ec98ea480a95]
	I0729 04:59:13.198683   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:59:13.214986   23149 logs.go:276] 2 containers: [6d9ecfc5a083 deb9aa52f607]
	I0729 04:59:13.215072   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:59:13.225811   23149 logs.go:276] 2 containers: [a7b978433727 55fee86287f6]
	I0729 04:59:13.225899   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:59:13.236582   23149 logs.go:276] 2 containers: [d72161c2b884 55440b9075da]
	I0729 04:59:13.236655   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:59:13.246866   23149 logs.go:276] 0 containers: []
	W0729 04:59:13.246879   23149 logs.go:278] No container was found matching "kindnet"
	I0729 04:59:13.246938   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:59:13.257240   23149 logs.go:276] 2 containers: [3b462dca7963 c884ccbfc64f]
	I0729 04:59:13.257256   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 04:59:13.257261   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:59:13.262019   23149 logs.go:123] Gathering logs for kube-apiserver [dbc7495e69bc] ...
	I0729 04:59:13.262026   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbc7495e69bc"
	I0729 04:59:13.275792   23149 logs.go:123] Gathering logs for kube-proxy [a7b978433727] ...
	I0729 04:59:13.275802   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7b978433727"
	I0729 04:59:13.287337   23149 logs.go:123] Gathering logs for kube-controller-manager [d72161c2b884] ...
	I0729 04:59:13.287346   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72161c2b884"
	I0729 04:59:13.305052   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 04:59:13.305066   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 04:59:13.314194   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:59:13.314286   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:59:13.347025   23149 logs.go:123] Gathering logs for etcd [11be061cd908] ...
	I0729 04:59:13.347032   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11be061cd908"
	I0729 04:59:13.366452   23149 logs.go:123] Gathering logs for kube-scheduler [deb9aa52f607] ...
	I0729 04:59:13.366465   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 deb9aa52f607"
	I0729 04:59:13.382562   23149 logs.go:123] Gathering logs for kube-apiserver [2f48303c006e] ...
	I0729 04:59:13.382573   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f48303c006e"
	I0729 04:59:13.428518   23149 logs.go:123] Gathering logs for etcd [8ba66d0bf7e0] ...
	I0729 04:59:13.428531   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba66d0bf7e0"
	I0729 04:59:13.444850   23149 logs.go:123] Gathering logs for storage-provisioner [3b462dca7963] ...
	I0729 04:59:13.444865   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b462dca7963"
	I0729 04:59:13.456451   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:59:13.456460   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:59:13.491510   23149 logs.go:123] Gathering logs for coredns [ec98ea480a95] ...
	I0729 04:59:13.491525   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec98ea480a95"
	I0729 04:59:13.508155   23149 logs.go:123] Gathering logs for kube-scheduler [6d9ecfc5a083] ...
	I0729 04:59:13.508171   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d9ecfc5a083"
	I0729 04:59:13.520209   23149 logs.go:123] Gathering logs for kube-proxy [55fee86287f6] ...
	I0729 04:59:13.520221   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55fee86287f6"
	I0729 04:59:13.532900   23149 logs.go:123] Gathering logs for kube-controller-manager [55440b9075da] ...
	I0729 04:59:13.532910   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55440b9075da"
	I0729 04:59:13.548206   23149 logs.go:123] Gathering logs for storage-provisioner [c884ccbfc64f] ...
	I0729 04:59:13.548221   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c884ccbfc64f"
	I0729 04:59:13.565986   23149 logs.go:123] Gathering logs for Docker ...
	I0729 04:59:13.565999   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:59:13.588668   23149 logs.go:123] Gathering logs for container status ...
	I0729 04:59:13.588675   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:59:13.600596   23149 logs.go:123] Gathering logs for coredns [6a0dd7f75f7f] ...
	I0729 04:59:13.600608   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0dd7f75f7f"
	I0729 04:59:13.612057   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:59:13.612067   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 04:59:13.612092   23149 out.go:239] X Problems detected in kubelet:
	W0729 04:59:13.612102   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:59:13.612106   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:59:13.612110   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:59:13.612113   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:59:23.616019   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:59:28.618196   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:59:28.618283   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:59:28.633184   23149 logs.go:276] 2 containers: [dbc7495e69bc 2f48303c006e]
	I0729 04:59:28.633260   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:59:28.645580   23149 logs.go:276] 2 containers: [8ba66d0bf7e0 11be061cd908]
	I0729 04:59:28.645635   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:59:28.658091   23149 logs.go:276] 2 containers: [6a0dd7f75f7f ec98ea480a95]
	I0729 04:59:28.658168   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:59:28.671803   23149 logs.go:276] 2 containers: [6d9ecfc5a083 deb9aa52f607]
	I0729 04:59:28.671886   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:59:28.683678   23149 logs.go:276] 2 containers: [a7b978433727 55fee86287f6]
	I0729 04:59:28.683765   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:59:28.695648   23149 logs.go:276] 2 containers: [d72161c2b884 55440b9075da]
	I0729 04:59:28.695741   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:59:28.706926   23149 logs.go:276] 0 containers: []
	W0729 04:59:28.706939   23149 logs.go:278] No container was found matching "kindnet"
	I0729 04:59:28.707002   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:59:28.718773   23149 logs.go:276] 2 containers: [3b462dca7963 c884ccbfc64f]
	I0729 04:59:28.718789   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 04:59:28.718796   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 04:59:28.728622   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:59:28.728718   23149 logs.go:138] Found kubelet problem: Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:59:28.762037   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 04:59:28.762050   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:59:28.767021   23149 logs.go:123] Gathering logs for kube-scheduler [deb9aa52f607] ...
	I0729 04:59:28.767030   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 deb9aa52f607"
	I0729 04:59:28.782859   23149 logs.go:123] Gathering logs for kube-proxy [a7b978433727] ...
	I0729 04:59:28.782870   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7b978433727"
	I0729 04:59:28.794936   23149 logs.go:123] Gathering logs for kube-controller-manager [55440b9075da] ...
	I0729 04:59:28.794949   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55440b9075da"
	I0729 04:59:28.810969   23149 logs.go:123] Gathering logs for etcd [11be061cd908] ...
	I0729 04:59:28.810981   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11be061cd908"
	I0729 04:59:28.825869   23149 logs.go:123] Gathering logs for coredns [6a0dd7f75f7f] ...
	I0729 04:59:28.825882   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0dd7f75f7f"
	I0729 04:59:28.836998   23149 logs.go:123] Gathering logs for kube-scheduler [6d9ecfc5a083] ...
	I0729 04:59:28.837008   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d9ecfc5a083"
	I0729 04:59:28.849144   23149 logs.go:123] Gathering logs for storage-provisioner [c884ccbfc64f] ...
	I0729 04:59:28.849153   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c884ccbfc64f"
	I0729 04:59:28.864645   23149 logs.go:123] Gathering logs for Docker ...
	I0729 04:59:28.864658   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:59:28.888644   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:59:28.888653   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:59:28.923807   23149 logs.go:123] Gathering logs for kube-apiserver [dbc7495e69bc] ...
	I0729 04:59:28.923818   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbc7495e69bc"
	I0729 04:59:28.938548   23149 logs.go:123] Gathering logs for kube-apiserver [2f48303c006e] ...
	I0729 04:59:28.938559   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f48303c006e"
	I0729 04:59:28.979590   23149 logs.go:123] Gathering logs for coredns [ec98ea480a95] ...
	I0729 04:59:28.979608   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec98ea480a95"
	I0729 04:59:28.991786   23149 logs.go:123] Gathering logs for kube-proxy [55fee86287f6] ...
	I0729 04:59:28.991800   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55fee86287f6"
	I0729 04:59:29.003703   23149 logs.go:123] Gathering logs for kube-controller-manager [d72161c2b884] ...
	I0729 04:59:29.003717   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72161c2b884"
	I0729 04:59:29.021362   23149 logs.go:123] Gathering logs for etcd [8ba66d0bf7e0] ...
	I0729 04:59:29.021377   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba66d0bf7e0"
	I0729 04:59:29.035981   23149 logs.go:123] Gathering logs for storage-provisioner [3b462dca7963] ...
	I0729 04:59:29.035990   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b462dca7963"
	I0729 04:59:29.047288   23149 logs.go:123] Gathering logs for container status ...
	I0729 04:59:29.047298   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:59:29.059127   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:59:29.059139   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 04:59:29.059164   23149 out.go:239] X Problems detected in kubelet:
	W0729 04:59:29.059168   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: W0729 11:55:07.683495    1914 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	W0729 04:59:29.059172   23149 out.go:239]   Jul 29 11:55:07 running-upgrade-965000 kubelet[1914]: E0729 11:55:07.683514    1914 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-965000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-965000' and this object
	I0729 04:59:29.059177   23149 out.go:304] Setting ErrFile to fd 2...
	I0729 04:59:29.059180   23149 out.go:338] TERM=,COLORTERM=, which probably does not support color
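	For context on the kubelet problems flagged above: these are node-authorizer denials, not crashes. The kubelet authenticates as system:node:running-upgrade-965000, and the node authorizer only lets a node read a ConfigMap once a pod that mounts it is bound to that node; during this restart no such relationship exists yet. The denial can be reproduced by hand from inside the guest with the node's own credentials (illustrative sketch; it reuses the kubectl binary and the /etc/kubernetes/kubelet.conf path that appear elsewhere in this log):

	    # Query the coredns ConfigMap as the node itself; the node authorizer
	    # is expected to answer "forbidden" until a pod on this node uses it.
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
	      --kubeconfig /etc/kubernetes/kubelet.conf \
	      -n kube-system get configmap coredns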
	I0729 04:59:39.063096   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:59:44.064345   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:59:44.064379   23149 kubeadm.go:597] duration metric: took 4m8.513763959s to restartPrimaryControlPlane
	W0729 04:59:44.064414   23149 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 04:59:44.064430   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 04:59:45.127289   23149 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.062867708s)
	I0729 04:59:45.127377   23149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 04:59:45.132324   23149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:59:45.135093   23149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:59:45.137938   23149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:59:45.137944   23149 kubeadm.go:157] found existing configuration files:
	
	I0729 04:59:45.137967   23149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/admin.conf
	I0729 04:59:45.140972   23149 kubeadm.go:163] "https://control-plane.minikube.internal:54177" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:59:45.140992   23149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:59:45.143956   23149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/kubelet.conf
	I0729 04:59:45.146245   23149 kubeadm.go:163] "https://control-plane.minikube.internal:54177" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:59:45.146267   23149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:59:45.149275   23149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/controller-manager.conf
	I0729 04:59:45.152201   23149 kubeadm.go:163] "https://control-plane.minikube.internal:54177" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:59:45.152222   23149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:59:45.154645   23149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/scheduler.conf
	I0729 04:59:45.157418   23149 kubeadm.go:163] "https://control-plane.minikube.internal:54177" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54177 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:59:45.157440   23149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 04:59:45.160258   23149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 04:59:45.179471   23149 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 04:59:45.179550   23149 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 04:59:45.229483   23149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 04:59:45.229576   23149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 04:59:45.229626   23149 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 04:59:45.277832   23149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 04:59:45.282102   23149 out.go:204]   - Generating certificates and keys ...
	I0729 04:59:45.282138   23149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 04:59:45.282175   23149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 04:59:45.282217   23149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 04:59:45.282248   23149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 04:59:45.282288   23149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 04:59:45.282316   23149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 04:59:45.282348   23149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 04:59:45.282377   23149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 04:59:45.282424   23149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 04:59:45.282457   23149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 04:59:45.282475   23149 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 04:59:45.282501   23149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 04:59:45.360904   23149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 04:59:45.446251   23149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 04:59:45.565019   23149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 04:59:45.666271   23149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 04:59:45.693956   23149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 04:59:45.694877   23149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 04:59:45.694908   23149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 04:59:45.779120   23149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 04:59:45.783272   23149 out.go:204]   - Booting up control plane ...
	I0729 04:59:45.783319   23149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 04:59:45.783359   23149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 04:59:45.783409   23149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 04:59:45.783449   23149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 04:59:45.783531   23149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 04:59:50.786569   23149 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.004316 seconds
	I0729 04:59:50.786767   23149 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 04:59:50.807200   23149 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 04:59:51.317735   23149 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 04:59:51.317926   23149 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-965000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 04:59:51.822406   23149 kubeadm.go:310] [bootstrap-token] Using token: p27sml.cih5vuhlnui8tmes
	I0729 04:59:51.828285   23149 out.go:204]   - Configuring RBAC rules ...
	I0729 04:59:51.828347   23149 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 04:59:51.828392   23149 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 04:59:51.830261   23149 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 04:59:51.831984   23149 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0729 04:59:51.834426   23149 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 04:59:51.836055   23149 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 04:59:51.839506   23149 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 04:59:52.025998   23149 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 04:59:52.226755   23149 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 04:59:52.227256   23149 kubeadm.go:310] 
	I0729 04:59:52.227287   23149 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 04:59:52.227291   23149 kubeadm.go:310] 
	I0729 04:59:52.227326   23149 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 04:59:52.227329   23149 kubeadm.go:310] 
	I0729 04:59:52.227340   23149 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 04:59:52.227373   23149 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 04:59:52.227402   23149 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 04:59:52.227406   23149 kubeadm.go:310] 
	I0729 04:59:52.227437   23149 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 04:59:52.227441   23149 kubeadm.go:310] 
	I0729 04:59:52.227466   23149 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 04:59:52.227469   23149 kubeadm.go:310] 
	I0729 04:59:52.227495   23149 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 04:59:52.227534   23149 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 04:59:52.227572   23149 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 04:59:52.227575   23149 kubeadm.go:310] 
	I0729 04:59:52.227621   23149 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 04:59:52.227659   23149 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 04:59:52.227664   23149 kubeadm.go:310] 
	I0729 04:59:52.227702   23149 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p27sml.cih5vuhlnui8tmes \
	I0729 04:59:52.227746   23149 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:19abb723ab6eb994cd48198e215993e10e658d429ac48770fbcd96c8643368d2 \
	I0729 04:59:52.227762   23149 kubeadm.go:310] 	--control-plane 
	I0729 04:59:52.227765   23149 kubeadm.go:310] 
	I0729 04:59:52.227817   23149 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 04:59:52.227820   23149 kubeadm.go:310] 
	I0729 04:59:52.227857   23149 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p27sml.cih5vuhlnui8tmes \
	I0729 04:59:52.227915   23149 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:19abb723ab6eb994cd48198e215993e10e658d429ac48770fbcd96c8643368d2 
	I0729 04:59:52.228023   23149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
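	Both join commands above advertise the same --discovery-token-ca-cert-hash. Per the kubeadm documentation, that value is the SHA-256 of the cluster CA's public key, so it can be recomputed on the node and compared against the hash printed above (sketch using the standard recipe from the kubeadm docs; this cluster keeps its CA under the certificateDir /var/lib/minikube/certs reported earlier):

	    # Recompute the discovery hash from the cluster CA public key.
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'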
	I0729 04:59:52.228033   23149 cni.go:84] Creating CNI manager for ""
	I0729 04:59:52.228049   23149 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:59:52.232443   23149 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 04:59:52.239449   23149 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 04:59:52.242287   23149 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
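	The conflist itself is generated in memory, so only its size (496 bytes) appears in the log. For reference, a minimal bridge-plus-portmap conflist of the kind written above looks roughly like this (illustrative sketch; the subnet and flag values are typical defaults, not read from this run):

	    # Sketch of a bridge CNI config of the shape minikube installs.
	    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": {
	            "type": "host-local",
	            "subnet": "10.244.0.0/16"
	          }
	        },
	        {
	          "type": "portmap",
	          "capabilities": {"portMappings": true}
	        }
	      ]
	    }
	    EOF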
	I0729 04:59:52.247071   23149 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 04:59:52.247111   23149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 04:59:52.247131   23149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-965000 minikube.k8s.io/updated_at=2024_07_29T04_59_52_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=running-upgrade-965000 minikube.k8s.io/primary=true
	I0729 04:59:52.293208   23149 kubeadm.go:1113] duration metric: took 46.131417ms to wait for elevateKubeSystemPrivileges
	I0729 04:59:52.293224   23149 ops.go:34] apiserver oom_adj: -16
	I0729 04:59:52.293231   23149 kubeadm.go:394] duration metric: took 4m16.757711417s to StartCluster
	I0729 04:59:52.293241   23149 settings.go:142] acquiring lock: {Name:mkdb53fe54493beaa070cff365444ca7eaee0535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:59:52.293318   23149 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:59:52.294421   23149 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/kubeconfig: {Name:mkedcfdd12fb07fdee08d71279d618976d6521b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:59:52.294848   23149 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:59:52.294871   23149 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 04:59:52.294903   23149 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-965000"
	I0729 04:59:52.294916   23149 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-965000"
	W0729 04:59:52.294920   23149 addons.go:243] addon storage-provisioner should already be in state true
	I0729 04:59:52.294924   23149 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-965000"
	I0729 04:59:52.294932   23149 host.go:66] Checking if "running-upgrade-965000" exists ...
	I0729 04:59:52.294936   23149 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-965000"
	I0729 04:59:52.295125   23149 config.go:182] Loaded profile config "running-upgrade-965000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:59:52.295943   23149 kapi.go:59] client config for running-upgrade-965000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/running-upgrade-965000/client.key", CAFile:"/Users/jenkins/minikube-integration/19338-21024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105fe4080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:59:52.296056   23149 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-965000"
	W0729 04:59:52.296061   23149 addons.go:243] addon default-storageclass should already be in state true
	I0729 04:59:52.296067   23149 host.go:66] Checking if "running-upgrade-965000" exists ...
	I0729 04:59:52.299367   23149 out.go:177] * Verifying Kubernetes components...
	I0729 04:59:52.299746   23149 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 04:59:52.302586   23149 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 04:59:52.302593   23149 sshutil.go:53] new ssh client: &{IP:localhost Port:54112 SSHKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/running-upgrade-965000/id_rsa Username:docker}
	I0729 04:59:52.305358   23149 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:59:52.309373   23149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:59:52.313244   23149 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:59:52.313250   23149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 04:59:52.313256   23149 sshutil.go:53] new ssh client: &{IP:localhost Port:54112 SSHKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/running-upgrade-965000/id_rsa Username:docker}
	I0729 04:59:52.401792   23149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:59:52.407545   23149 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:59:52.407591   23149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:59:52.412084   23149 api_server.go:72] duration metric: took 117.21725ms to wait for apiserver process to appear ...
	I0729 04:59:52.412100   23149 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:59:52.412114   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:59:52.416908   23149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 04:59:52.429954   23149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:59:57.414210   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:59:57.414295   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:02.414958   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:02.414982   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:07.415343   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:07.415364   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:12.415835   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:12.415860   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:17.416502   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:17.416526   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:22.417331   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:22.417351   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 05:00:22.767958   23149 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 05:00:22.773326   23149 out.go:177] * Enabled addons: storage-provisioner
	I0729 05:00:22.781245   23149 addons.go:510] duration metric: took 30.486938125s for enable addons: enabled=[storage-provisioner]
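	From here the run settles into a loop: each healthz probe times out, minikube gathers component logs, and the probe is retried. Every "Checking apiserver healthz" line is an HTTPS GET with a short per-request deadline against 10.0.2.15:8443, a QEMU user-mode network address reachable only from inside the VM. The equivalent manual probe from inside the guest would be (sketch; -k skips certificate verification because the apiserver serves a cert signed by minikube's own CA):

	    # Reproduce the health probe the log keeps retrying; --max-time
	    # mirrors the short client timeout behind the "stopped" messages.
	    curl -k --max-time 5 https://10.0.2.15:8443/healthz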
	I0729 05:00:27.418046   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:27.418083   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:32.419445   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:32.419485   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:37.421204   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:37.421257   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:42.423448   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:42.423472   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:47.425761   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:47.425783   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:52.427942   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:52.428133   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:00:52.445344   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:00:52.445405   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:00:52.456015   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:00:52.456075   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:00:52.466486   23149 logs.go:276] 2 containers: [aa845cccf85f 3e75dce8d6fb]
	I0729 05:00:52.466543   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:00:52.477415   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:00:52.477487   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:00:52.488309   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:00:52.488369   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:00:52.503100   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:00:52.503157   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:00:52.518963   23149 logs.go:276] 0 containers: []
	W0729 05:00:52.518974   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:00:52.519025   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:00:52.529239   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:00:52.529257   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:00:52.529263   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:00:52.541300   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:00:52.541311   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:00:52.545766   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:00:52.545772   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:00:52.588400   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:00:52.588410   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:00:52.604625   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:00:52.604634   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:00:52.623176   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:00:52.623186   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:00:52.648131   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:00:52.648138   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:00:52.659532   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:00:52.659542   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:00:52.670997   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:00:52.671009   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:00:52.705214   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:00:52.705223   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:00:52.720127   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:00:52.720140   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:00:52.734326   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:00:52.734336   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:00:52.745242   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:00:52.745256   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:00:55.258562   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:01:00.260808   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:01:00.261108   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:01:00.277813   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:01:00.277886   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:01:00.291520   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:01:00.291608   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:01:00.310340   23149 logs.go:276] 2 containers: [aa845cccf85f 3e75dce8d6fb]
	I0729 05:01:00.310421   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:01:00.322185   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:01:00.322253   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:01:00.332864   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:01:00.332928   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:01:00.343461   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:01:00.343528   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:01:00.354328   23149 logs.go:276] 0 containers: []
	W0729 05:01:00.354339   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:01:00.354420   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:01:00.364555   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:01:00.364569   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:01:00.364577   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:01:00.379400   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:01:00.379411   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:01:00.391345   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:01:00.391356   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:01:00.414099   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:01:00.414111   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:01:00.432117   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:01:00.432128   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:01:00.443707   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:01:00.443719   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:01:00.468711   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:01:00.468721   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:01:00.503584   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:01:00.503592   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:01:00.508216   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:01:00.508224   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:01:00.550259   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:01:00.550270   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:01:00.566224   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:01:00.566236   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:01:00.578802   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:01:00.578812   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:01:00.589989   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:01:00.590002   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:01:03.103281   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:01:08.105713   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:01:08.106020   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:01:08.127715   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:01:08.127811   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:01:08.143790   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:01:08.143876   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:01:08.156047   23149 logs.go:276] 2 containers: [aa845cccf85f 3e75dce8d6fb]
	I0729 05:01:08.156123   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:01:08.167301   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:01:08.167380   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:01:08.182063   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:01:08.182135   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:01:08.192764   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:01:08.192831   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:01:08.203928   23149 logs.go:276] 0 containers: []
	W0729 05:01:08.203941   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:01:08.203998   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:01:08.217015   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:01:08.217031   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:01:08.217036   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:01:08.229085   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:01:08.229096   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:01:08.262385   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:01:08.262393   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:01:08.299777   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:01:08.299788   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:01:08.314419   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:01:08.314435   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:01:08.328572   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:01:08.328587   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:01:08.343146   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:01:08.343160   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:01:08.360600   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:01:08.360609   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:01:08.385545   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:01:08.385553   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:01:08.389681   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:01:08.389690   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:01:08.402065   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:01:08.402080   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:01:08.413761   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:01:08.413772   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:01:08.425802   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:01:08.425816   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:01:10.939928   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:01:15.942205   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:01:15.942380   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:01:15.957896   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:01:15.957975   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:01:15.970892   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:01:15.970961   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:01:15.986718   23149 logs.go:276] 2 containers: [aa845cccf85f 3e75dce8d6fb]
	I0729 05:01:15.986784   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:01:15.997154   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:01:15.997224   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:01:16.007393   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:01:16.007459   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:01:16.018288   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:01:16.018363   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:01:16.027953   23149 logs.go:276] 0 containers: []
	W0729 05:01:16.027965   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:01:16.028022   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:01:16.038117   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:01:16.038131   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:01:16.038139   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:01:16.049267   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:01:16.049279   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:01:16.064690   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:01:16.064703   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:01:16.080384   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:01:16.080396   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:01:16.116678   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:01:16.116694   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:01:16.155201   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:01:16.155213   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:01:16.176753   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:01:16.176768   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:01:16.201891   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:01:16.201901   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:01:16.213931   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:01:16.213947   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:01:16.239092   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:01:16.239102   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:01:16.250529   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:01:16.250542   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:01:16.255963   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:01:16.255973   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:01:16.269755   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:01:16.269765   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:01:18.782274   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:01:23.784449   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:01:23.784591   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:01:23.801466   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:01:23.801552   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:01:23.815245   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:01:23.815313   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:01:23.825937   23149 logs.go:276] 2 containers: [aa845cccf85f 3e75dce8d6fb]
	I0729 05:01:23.826009   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:01:23.835750   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:01:23.835818   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:01:23.846018   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:01:23.846078   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:01:23.856110   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:01:23.856173   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:01:23.866401   23149 logs.go:276] 0 containers: []
	W0729 05:01:23.866412   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:01:23.866468   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:01:23.876720   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:01:23.876734   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:01:23.876739   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:01:23.881119   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:01:23.881130   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:01:23.919642   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:01:23.919656   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:01:23.934098   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:01:23.934108   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:01:23.951994   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:01:23.952005   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:01:23.963416   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:01:23.963429   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:01:23.978626   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:01:23.978636   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:01:23.990811   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:01:23.990822   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:01:24.025368   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:01:24.025377   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:01:24.047823   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:01:24.047830   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:01:24.058990   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:01:24.059006   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:01:24.070784   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:01:24.070798   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:01:24.089626   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:01:24.089636   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:01:26.604283   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:01:31.606839   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:01:31.607146   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:01:31.637013   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:01:31.637123   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:01:31.656240   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:01:31.656336   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:01:31.671330   23149 logs.go:276] 2 containers: [aa845cccf85f 3e75dce8d6fb]
	I0729 05:01:31.671401   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:01:31.686605   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:01:31.686677   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:01:31.697138   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:01:31.697213   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:01:31.707951   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:01:31.708017   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:01:31.719142   23149 logs.go:276] 0 containers: []
	W0729 05:01:31.719151   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:01:31.719204   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:01:31.729697   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:01:31.729711   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:01:31.729717   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:01:31.747150   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:01:31.747161   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:01:31.758921   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:01:31.758930   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:01:31.770726   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:01:31.770739   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:01:31.788879   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:01:31.788890   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:01:31.800084   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:01:31.800096   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:01:31.815404   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:01:31.815415   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:01:31.829965   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:01:31.829975   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:01:31.841471   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:01:31.841483   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:01:31.855376   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:01:31.855387   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:01:31.881427   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:01:31.881438   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:01:31.916585   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:01:31.916594   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:01:31.920964   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:01:31.920970   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
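The "Gathering logs for …" block that follows each failed probe maps every source to one shell command, wrapped in /bin/bash -c so pipes, backticks, and || fallbacks survive: docker logs --tail 400 for each discovered container, journalctl -n 400 for the kubelet and the docker/cri-docker units, a dmesg call restricted to warn-and-worse levels, kubectl describe nodes via the guest's pinned v1.24.1 binary, and a container-status command that prefers crictl and falls back to docker ps -a. A sketch of that step, run locally for illustration (minikube executes these through its SSH runner into the VM):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one diagnostic command the way ssh_runner.go:195 logs it:
    // wrapped in /bin/bash -c so shell syntax inside the command string works.
    func gather(name, cmd string) {
    	fmt.Printf("Gathering logs for %s ...\n", name)
    	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Print(string(out))
    }

    func main() {
    	// Last 400 lines of one container's logs (IDs come from the discovery step).
    	gather("kube-apiserver [d71d59296a6d]", "docker logs --tail 400 d71d59296a6d")
    	// Unit logs from systemd's journal.
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	// Kernel messages: human-readable, no pager or color, warnings and worse only.
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    	// `which crictl || echo crictl` expands to crictl's path when installed;
    	// otherwise the bare word fails to run and `|| sudo docker ps -a` takes over.
    	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }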
	I0729 05:01:34.458851   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:01:39.461210   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:01:39.461698   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:01:39.498997   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:01:39.499132   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:01:39.521396   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:01:39.521488   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:01:39.539103   23149 logs.go:276] 2 containers: [aa845cccf85f 3e75dce8d6fb]
	I0729 05:01:39.539193   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:01:39.554236   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:01:39.554319   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:01:39.567282   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:01:39.567357   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:01:39.579614   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:01:39.579681   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:01:39.591343   23149 logs.go:276] 0 containers: []
	W0729 05:01:39.591355   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:01:39.591425   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:01:39.609845   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:01:39.609861   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:01:39.609866   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:01:39.623985   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:01:39.623997   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:01:39.638356   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:01:39.638369   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:01:39.664457   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:01:39.664469   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:01:39.700668   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:01:39.700685   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:01:39.705186   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:01:39.705195   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:01:39.716935   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:01:39.716946   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:01:39.741975   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:01:39.741987   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:01:39.760650   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:01:39.760667   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:01:39.779261   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:01:39.779271   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:01:39.793769   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:01:39.793780   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:01:39.829741   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:01:39.829753   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:01:39.849980   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:01:39.849991   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:01:42.366185   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:01:47.367661   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:01:47.367862   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:01:47.385411   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:01:47.385498   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:01:47.398611   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:01:47.398683   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:01:47.410221   23149 logs.go:276] 2 containers: [aa845cccf85f 3e75dce8d6fb]
	I0729 05:01:47.410290   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:01:47.421047   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:01:47.421119   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:01:47.431774   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:01:47.431848   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:01:47.442399   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:01:47.442455   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:01:47.452903   23149 logs.go:276] 0 containers: []
	W0729 05:01:47.452914   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:01:47.452971   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:01:47.463427   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:01:47.463444   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:01:47.463449   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:01:47.468275   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:01:47.468282   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:01:47.482483   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:01:47.482494   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:01:47.500245   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:01:47.500255   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:01:47.511271   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:01:47.511282   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:01:47.526084   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:01:47.526094   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:01:47.538078   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:01:47.538089   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:01:47.561058   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:01:47.561072   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:01:47.594707   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:01:47.594716   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:01:47.629574   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:01:47.629583   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:01:47.643611   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:01:47.643620   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:01:47.655171   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:01:47.655182   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:01:47.667402   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:01:47.667413   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:01:50.179570   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:01:55.181760   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:01:55.181980   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:01:55.199328   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:01:55.199413   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:01:55.213799   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:01:55.213874   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:01:55.224104   23149 logs.go:276] 2 containers: [aa845cccf85f 3e75dce8d6fb]
	I0729 05:01:55.224164   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:01:55.234810   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:01:55.234870   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:01:55.246723   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:01:55.246807   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:01:55.257393   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:01:55.257456   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:01:55.268550   23149 logs.go:276] 0 containers: []
	W0729 05:01:55.268566   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:01:55.268619   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:01:55.279454   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:01:55.279469   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:01:55.279475   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:01:55.292657   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:01:55.292668   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:01:55.317261   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:01:55.317271   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:01:55.329260   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:01:55.329270   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:01:55.364911   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:01:55.364921   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:01:55.379400   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:01:55.379412   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:01:55.393747   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:01:55.393760   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:01:55.408545   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:01:55.408555   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:01:55.420173   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:01:55.420182   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:01:55.437641   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:01:55.437650   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:01:55.448556   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:01:55.448566   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:01:55.482147   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:01:55.482155   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:01:55.486862   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:01:55.486869   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:01:57.999243   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:02:03.001471   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:02:03.001641   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:02:03.020428   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:02:03.020523   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:02:03.036833   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:02:03.036905   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:02:03.048786   23149 logs.go:276] 2 containers: [aa845cccf85f 3e75dce8d6fb]
	I0729 05:02:03.048862   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:02:03.062065   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:02:03.062137   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:02:03.072711   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:02:03.072783   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:02:03.083056   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:02:03.083121   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:02:03.093902   23149 logs.go:276] 0 containers: []
	W0729 05:02:03.093913   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:02:03.093974   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:02:03.104571   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:02:03.104588   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:02:03.104594   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:02:03.125157   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:02:03.125168   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:02:03.139625   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:02:03.139637   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:02:03.151069   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:02:03.151080   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:02:03.162875   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:02:03.162887   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:02:03.167846   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:02:03.167853   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:02:03.182456   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:02:03.182466   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:02:03.194325   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:02:03.194336   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:02:03.205503   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:02:03.205515   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:02:03.217012   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:02:03.217022   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:02:03.235134   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:02:03.235144   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:02:03.259929   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:02:03.259946   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:02:03.308883   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:02:03.308898   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
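Stepping back, the trace is one outer wait loop: probe /healthz, and for as long as the probe keeps timing out, dump a fresh diagnostics bundle, pause briefly, and retry; the timestamps show roughly 2.5 seconds between the end of one gathering pass and the next probe. A sketch of that loop follows; the pause and the overall deadline are inferred from the timestamps, not taken from minikube's source:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitForAPIServer repeats the probe/dump cycle visible in the trace until
    // the apiserver answers or an overall deadline passes.
    func waitForAPIServer(deadline time.Duration, check func() error, dumpLogs func()) error {
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		if err := check(); err == nil {
    			return nil
    		}
    		dumpLogs()                          // the "Gathering logs for ..." pass
    		time.Sleep(2500 * time.Millisecond) // gap observed between cycles (assumption)
    	}
    	return errors.New("apiserver never reported healthy before the deadline")
    }

    func main() {
    	err := waitForAPIServer(10*time.Second,
    		func() error { return errors.New("context deadline exceeded") }, // stand-in for the real probe
    		func() { fmt.Println("Gathering logs for kubelet ...") })
    	fmt.Println(err)
    }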
	I0729 05:02:05.883427   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:02:10.885805   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:02:10.886165   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:02:10.924745   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:02:10.924897   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:02:10.947984   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:02:10.948091   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:02:10.966971   23149 logs.go:276] 4 containers: [389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]
	I0729 05:02:10.967050   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:02:10.979228   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:02:10.979297   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:02:10.991030   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:02:10.991103   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:02:11.001850   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:02:11.001916   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:02:11.012811   23149 logs.go:276] 0 containers: []
	W0729 05:02:11.012823   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:02:11.012882   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:02:11.023273   23149 logs.go:276] 1 containers: [9d81d423ae68]
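Note the coredns line in the enumeration above: the count has gone from 2 to 4 ([389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]). Since docker ps -a lists exited containers as well as running ones, this most likely means the kubelet restarted both coredns pods during the wait, leaving the two old containers in the listing next to the two new ones; the gathering passes below accordingly tail all four. One way to tell the live ones apart (illustrative, not a command minikube runs):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same filter as the discovery step, but with {{.Status}} added so
    	// "Up ..." containers stand out from "Exited ..." ones.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_coredns",
    		"--format", "{{.ID}} {{.Status}}").Output()
    	if err != nil {
    		fmt.Println("docker not reachable:", err)
    		return
    	}
    	fmt.Print(string(out))
    }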
	I0729 05:02:11.023292   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:02:11.023297   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:02:11.046568   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:02:11.046575   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:02:11.058339   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:02:11.058352   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:02:11.092353   23149 logs.go:123] Gathering logs for coredns [389e780b420b] ...
	I0729 05:02:11.092363   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 389e780b420b"
	I0729 05:02:11.103831   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:02:11.103841   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:02:11.116627   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:02:11.116638   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:02:11.137291   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:02:11.137300   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:02:11.151834   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:02:11.151844   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:02:11.163611   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:02:11.163622   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:02:11.175854   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:02:11.175868   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:02:11.190487   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:02:11.190497   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:02:11.207498   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:02:11.207508   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:02:11.212353   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:02:11.212363   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:02:11.247001   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:02:11.247013   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:02:11.261320   23149 logs.go:123] Gathering logs for coredns [d3c97d2c207a] ...
	I0729 05:02:11.261332   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c97d2c207a"
	I0729 05:02:13.774823   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:02:18.777230   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:02:18.777588   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:02:18.809256   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:02:18.809387   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:02:18.827971   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:02:18.828063   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:02:18.846896   23149 logs.go:276] 4 containers: [389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]
	I0729 05:02:18.846976   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:02:18.858767   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:02:18.858847   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:02:18.869980   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:02:18.870052   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:02:18.881062   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:02:18.881126   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:02:18.895167   23149 logs.go:276] 0 containers: []
	W0729 05:02:18.895179   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:02:18.895235   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:02:18.905557   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:02:18.905579   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:02:18.905586   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:02:18.918975   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:02:18.918988   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:02:18.930527   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:02:18.930540   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:02:18.942057   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:02:18.942070   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:02:18.978080   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:02:18.978092   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:02:18.990433   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:02:18.990444   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:02:19.002578   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:02:19.002595   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:02:19.018316   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:02:19.018331   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:02:19.053382   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:02:19.053392   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:02:19.082387   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:02:19.082400   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:02:19.100518   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:02:19.100530   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:02:19.118811   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:02:19.118821   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:02:19.143651   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:02:19.143659   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:02:19.148092   23149 logs.go:123] Gathering logs for coredns [389e780b420b] ...
	I0729 05:02:19.148099   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 389e780b420b"
	I0729 05:02:19.159375   23149 logs.go:123] Gathering logs for coredns [d3c97d2c207a] ...
	I0729 05:02:19.159384   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c97d2c207a"
	I0729 05:02:21.673388   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:02:26.675685   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:02:26.675900   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:02:26.701908   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:02:26.702023   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:02:26.718846   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:02:26.718936   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:02:26.732169   23149 logs.go:276] 4 containers: [389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]
	I0729 05:02:26.732252   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:02:26.743626   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:02:26.743704   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:02:26.755305   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:02:26.755374   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:02:26.765789   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:02:26.765858   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:02:26.775958   23149 logs.go:276] 0 containers: []
	W0729 05:02:26.775968   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:02:26.776024   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:02:26.786275   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:02:26.786294   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:02:26.786300   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:02:26.800114   23149 logs.go:123] Gathering logs for coredns [389e780b420b] ...
	I0729 05:02:26.800126   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 389e780b420b"
	I0729 05:02:26.815918   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:02:26.815931   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:02:26.829358   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:02:26.829367   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:02:26.834427   23149 logs.go:123] Gathering logs for coredns [d3c97d2c207a] ...
	I0729 05:02:26.834437   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c97d2c207a"
	I0729 05:02:26.846156   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:02:26.846167   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:02:26.860733   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:02:26.860743   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:02:26.872530   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:02:26.872542   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:02:26.883701   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:02:26.883714   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:02:26.907405   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:02:26.907413   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:02:26.941401   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:02:26.941409   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:02:26.955440   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:02:26.955449   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:02:26.973583   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:02:26.973592   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:02:27.011713   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:02:27.011725   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:02:27.023775   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:02:27.023789   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:02:29.538762   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:02:34.541407   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:02:34.541752   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:02:34.584250   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:02:34.584376   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:02:34.607065   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:02:34.607153   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:02:34.621173   23149 logs.go:276] 4 containers: [389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]
	I0729 05:02:34.621253   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:02:34.633298   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:02:34.633371   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:02:34.644833   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:02:34.644897   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:02:34.655848   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:02:34.655922   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:02:34.667011   23149 logs.go:276] 0 containers: []
	W0729 05:02:34.667024   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:02:34.667079   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:02:34.677505   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:02:34.677523   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:02:34.677528   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:02:34.692565   23149 logs.go:123] Gathering logs for coredns [389e780b420b] ...
	I0729 05:02:34.692577   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 389e780b420b"
	I0729 05:02:34.704240   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:02:34.704249   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:02:34.716125   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:02:34.716136   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:02:34.727719   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:02:34.727735   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:02:34.740113   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:02:34.740126   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:02:34.744871   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:02:34.744878   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:02:34.780648   23149 logs.go:123] Gathering logs for coredns [d3c97d2c207a] ...
	I0729 05:02:34.780659   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c97d2c207a"
	I0729 05:02:34.792237   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:02:34.792247   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:02:34.803453   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:02:34.803465   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:02:34.821309   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:02:34.821322   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:02:34.847417   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:02:34.847429   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:02:34.861808   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:02:34.861822   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:02:34.877034   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:02:34.877045   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:02:34.897499   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:02:34.897513   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:02:37.435322   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:02:42.436359   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:02:42.436518   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:02:42.447442   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:02:42.447515   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:02:42.457664   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:02:42.457726   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:02:42.468409   23149 logs.go:276] 4 containers: [389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]
	I0729 05:02:42.468483   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:02:42.478634   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:02:42.478705   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:02:42.489562   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:02:42.489637   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:02:42.500205   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:02:42.500271   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:02:42.511259   23149 logs.go:276] 0 containers: []
	W0729 05:02:42.511273   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:02:42.511336   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:02:42.522487   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:02:42.522504   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:02:42.522510   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:02:42.537134   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:02:42.537147   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:02:42.554638   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:02:42.554650   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:02:42.578170   23149 logs.go:123] Gathering logs for coredns [389e780b420b] ...
	I0729 05:02:42.578179   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 389e780b420b"
	I0729 05:02:42.590118   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:02:42.590127   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:02:42.601878   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:02:42.601890   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:02:42.614370   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:02:42.614382   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:02:42.648695   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:02:42.648704   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:02:42.653349   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:02:42.653355   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:02:42.667447   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:02:42.667456   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:02:42.703116   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:02:42.703127   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:02:42.717251   23149 logs.go:123] Gathering logs for coredns [d3c97d2c207a] ...
	I0729 05:02:42.717262   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c97d2c207a"
	I0729 05:02:42.729562   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:02:42.729572   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:02:42.741098   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:02:42.741108   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:02:42.753164   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:02:42.753177   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:02:45.266382   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:02:50.268659   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:02:50.268762   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:02:50.279976   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:02:50.280053   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:02:50.295429   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:02:50.295495   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:02:50.306173   23149 logs.go:276] 4 containers: [389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]
	I0729 05:02:50.306243   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:02:50.317129   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:02:50.317193   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:02:50.327657   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:02:50.327716   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:02:50.338094   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:02:50.338161   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:02:50.353649   23149 logs.go:276] 0 containers: []
	W0729 05:02:50.353662   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:02:50.353725   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:02:50.364621   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:02:50.364637   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:02:50.364642   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:02:50.379225   23149 logs.go:123] Gathering logs for coredns [d3c97d2c207a] ...
	I0729 05:02:50.379236   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c97d2c207a"
	I0729 05:02:50.391163   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:02:50.391173   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:02:50.402794   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:02:50.402807   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:02:50.416963   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:02:50.416974   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:02:50.421287   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:02:50.421296   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:02:50.456805   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:02:50.456815   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:02:50.469055   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:02:50.469066   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:02:50.484122   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:02:50.484132   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:02:50.501431   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:02:50.501441   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:02:50.536077   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:02:50.536086   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:02:50.550491   23149 logs.go:123] Gathering logs for coredns [389e780b420b] ...
	I0729 05:02:50.550501   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 389e780b420b"
	I0729 05:02:50.562266   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:02:50.562278   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:02:50.574352   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:02:50.574363   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:02:50.588617   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:02:50.588626   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:02:53.114991   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:02:58.116777   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:02:58.117094   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:02:58.147766   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:02:58.147892   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:02:58.165871   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:02:58.165967   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:02:58.179671   23149 logs.go:276] 4 containers: [389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]
	I0729 05:02:58.179742   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:02:58.191478   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:02:58.191551   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:02:58.202393   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:02:58.202460   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:02:58.212680   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:02:58.212737   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:02:58.223276   23149 logs.go:276] 0 containers: []
	W0729 05:02:58.223288   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:02:58.223351   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:02:58.234434   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:02:58.234452   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:02:58.234458   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:02:58.268748   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:02:58.268762   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:02:58.284843   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:02:58.284855   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:02:58.297580   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:02:58.297592   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:02:58.334317   23149 logs.go:123] Gathering logs for coredns [d3c97d2c207a] ...
	I0729 05:02:58.334332   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c97d2c207a"
	I0729 05:02:58.347973   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:02:58.347983   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:02:58.359522   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:02:58.359532   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:02:58.377659   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:02:58.377669   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:02:58.393194   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:02:58.393204   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:02:58.398362   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:02:58.398369   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:02:58.412527   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:02:58.412543   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:02:58.433328   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:02:58.433342   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:02:58.458740   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:02:58.458748   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:02:58.472961   23149 logs.go:123] Gathering logs for coredns [389e780b420b] ...
	I0729 05:02:58.472971   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 389e780b420b"
	I0729 05:02:58.485140   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:02:58.485151   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:03:00.999980   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:03:06.002613   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:03:06.002751   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:03:06.016711   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:03:06.016789   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:03:06.028323   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:03:06.028393   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:03:06.043741   23149 logs.go:276] 4 containers: [389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]
	I0729 05:03:06.043808   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:03:06.054198   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:03:06.054263   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:03:06.065134   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:03:06.065200   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:03:06.075600   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:03:06.075669   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:03:06.086094   23149 logs.go:276] 0 containers: []
	W0729 05:03:06.086113   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:03:06.086174   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:03:06.097020   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:03:06.097038   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:03:06.097044   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:03:06.135038   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:03:06.135050   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:03:06.147306   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:03:06.147316   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:03:06.158921   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:03:06.158933   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:03:06.180559   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:03:06.180570   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:03:06.192543   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:03:06.192554   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:03:06.213656   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:03:06.213666   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:03:06.219091   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:03:06.219099   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:03:06.233464   23149 logs.go:123] Gathering logs for coredns [389e780b420b] ...
	I0729 05:03:06.233475   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 389e780b420b"
	I0729 05:03:06.246047   23149 logs.go:123] Gathering logs for coredns [d3c97d2c207a] ...
	I0729 05:03:06.246057   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c97d2c207a"
	I0729 05:03:06.258333   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:03:06.258342   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:03:06.270817   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:03:06.270827   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:03:06.283003   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:03:06.283014   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:03:06.319522   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:03:06.319535   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:03:06.344506   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:03:06.344515   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:03:08.862165   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:03:13.864454   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:03:13.864696   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:03:13.900982   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:03:13.901074   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:03:13.917179   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:03:13.917249   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:03:13.930297   23149 logs.go:276] 4 containers: [389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]
	I0729 05:03:13.930363   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:03:13.941673   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:03:13.941745   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:03:13.952283   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:03:13.952342   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:03:13.965039   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:03:13.965114   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:03:13.975115   23149 logs.go:276] 0 containers: []
	W0729 05:03:13.975126   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:03:13.975188   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:03:13.992577   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:03:13.992597   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:03:13.992604   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:03:13.997400   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:03:13.997408   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:03:14.009761   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:03:14.009773   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:03:14.021349   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:03:14.021360   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:03:14.045983   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:03:14.045994   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:03:14.081177   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:03:14.081189   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:03:14.120105   23149 logs.go:123] Gathering logs for coredns [389e780b420b] ...
	I0729 05:03:14.120119   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 389e780b420b"
	I0729 05:03:14.131563   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:03:14.131575   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:03:14.147263   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:03:14.147274   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:03:14.158794   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:03:14.158806   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:03:14.176765   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:03:14.176779   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:03:14.188777   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:03:14.188792   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:03:14.203158   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:03:14.203170   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:03:14.217408   23149 logs.go:123] Gathering logs for coredns [d3c97d2c207a] ...
	I0729 05:03:14.217416   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c97d2c207a"
	I0729 05:03:14.228723   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:03:14.228738   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:03:16.741984   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:03:21.742787   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:03:21.743160   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:03:21.774526   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:03:21.774656   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:03:21.792387   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:03:21.792484   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:03:21.806345   23149 logs.go:276] 4 containers: [389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]
	I0729 05:03:21.806418   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:03:21.817949   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:03:21.818012   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:03:21.831656   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:03:21.831728   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:03:21.842375   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:03:21.842450   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:03:21.853432   23149 logs.go:276] 0 containers: []
	W0729 05:03:21.853445   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:03:21.853505   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:03:21.864144   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:03:21.864163   23149 logs.go:123] Gathering logs for coredns [389e780b420b] ...
	I0729 05:03:21.864169   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 389e780b420b"
	I0729 05:03:21.879884   23149 logs.go:123] Gathering logs for coredns [d3c97d2c207a] ...
	I0729 05:03:21.879900   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c97d2c207a"
	I0729 05:03:21.891315   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:03:21.891325   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:03:21.903113   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:03:21.903125   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:03:21.917337   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:03:21.917349   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:03:21.940080   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:03:21.940090   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:03:21.953786   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:03:21.953797   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:03:21.965986   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:03:21.966000   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:03:22.002332   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:03:22.002346   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:03:22.007332   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:03:22.007340   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:03:22.021413   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:03:22.021423   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:03:22.039945   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:03:22.039956   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:03:22.079789   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:03:22.079799   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:03:22.096990   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:03:22.097001   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:03:22.110770   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:03:22.110783   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:03:24.624553   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:03:29.626810   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:03:29.626960   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:03:29.639856   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:03:29.639927   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:03:29.652364   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:03:29.652445   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:03:29.664321   23149 logs.go:276] 4 containers: [389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]
	I0729 05:03:29.664394   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:03:29.675457   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:03:29.675528   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:03:29.686953   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:03:29.687026   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:03:29.697274   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:03:29.697347   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:03:29.707561   23149 logs.go:276] 0 containers: []
	W0729 05:03:29.707572   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:03:29.707632   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:03:29.718254   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:03:29.718273   23149 logs.go:123] Gathering logs for coredns [d3c97d2c207a] ...
	I0729 05:03:29.718278   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c97d2c207a"
	I0729 05:03:29.729960   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:03:29.729974   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:03:29.742019   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:03:29.742031   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:03:29.766631   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:03:29.766644   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:03:29.800161   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:03:29.800169   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:03:29.815263   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:03:29.815276   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:03:29.826884   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:03:29.826893   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:03:29.831273   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:03:29.831281   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:03:29.866343   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:03:29.866355   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:03:29.878344   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:03:29.878356   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:03:29.896002   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:03:29.896013   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:03:29.907616   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:03:29.907626   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:03:29.921545   23149 logs.go:123] Gathering logs for coredns [389e780b420b] ...
	I0729 05:03:29.921555   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 389e780b420b"
	I0729 05:03:29.933046   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:03:29.933057   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:03:29.954441   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:03:29.954455   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:03:32.467950   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:03:37.470090   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:03:37.470341   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:03:37.488537   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:03:37.488635   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:03:37.502268   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:03:37.502347   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:03:37.514364   23149 logs.go:276] 4 containers: [389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]
	I0729 05:03:37.514441   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:03:37.525429   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:03:37.525502   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:03:37.537515   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:03:37.537585   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:03:37.548636   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:03:37.548704   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:03:37.565224   23149 logs.go:276] 0 containers: []
	W0729 05:03:37.565238   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:03:37.565303   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:03:37.576354   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:03:37.576376   23149 logs.go:123] Gathering logs for coredns [389e780b420b] ...
	I0729 05:03:37.576381   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 389e780b420b"
	I0729 05:03:37.587769   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:03:37.587781   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:03:37.602214   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:03:37.602224   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:03:37.622066   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:03:37.622076   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:03:37.659275   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:03:37.659286   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:03:37.679972   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:03:37.679984   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:03:37.697599   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:03:37.697609   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:03:37.712764   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:03:37.712775   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:03:37.736783   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:03:37.736792   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:03:37.748044   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:03:37.748057   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:03:37.781905   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:03:37.781913   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:03:37.793393   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:03:37.793408   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:03:37.805393   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:03:37.805404   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:03:37.816859   23149 logs.go:123] Gathering logs for coredns [d3c97d2c207a] ...
	I0729 05:03:37.816869   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c97d2c207a"
	I0729 05:03:37.828259   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:03:37.828273   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:03:40.335308   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:03:45.337423   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:03:45.337886   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:03:45.378196   23149 logs.go:276] 1 containers: [d71d59296a6d]
	I0729 05:03:45.378334   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:03:45.399121   23149 logs.go:276] 1 containers: [fb4efc2f298b]
	I0729 05:03:45.399222   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:03:45.420567   23149 logs.go:276] 4 containers: [389e780b420b d3c97d2c207a aa845cccf85f 3e75dce8d6fb]
	I0729 05:03:45.420640   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:03:45.432650   23149 logs.go:276] 1 containers: [5967b8d04dab]
	I0729 05:03:45.432712   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:03:45.443892   23149 logs.go:276] 1 containers: [661ce72fecba]
	I0729 05:03:45.443960   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:03:45.454494   23149 logs.go:276] 1 containers: [be5618e93173]
	I0729 05:03:45.454550   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:03:45.464893   23149 logs.go:276] 0 containers: []
	W0729 05:03:45.464905   23149 logs.go:278] No container was found matching "kindnet"
	I0729 05:03:45.464964   23149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:03:45.476034   23149 logs.go:276] 1 containers: [9d81d423ae68]
	I0729 05:03:45.476052   23149 logs.go:123] Gathering logs for kube-controller-manager [be5618e93173] ...
	I0729 05:03:45.476058   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5618e93173"
	I0729 05:03:45.494035   23149 logs.go:123] Gathering logs for kubelet ...
	I0729 05:03:45.494046   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 05:03:45.527864   23149 logs.go:123] Gathering logs for etcd [fb4efc2f298b] ...
	I0729 05:03:45.527873   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4efc2f298b"
	I0729 05:03:45.541773   23149 logs.go:123] Gathering logs for kube-apiserver [d71d59296a6d] ...
	I0729 05:03:45.541786   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71d59296a6d"
	I0729 05:03:45.557960   23149 logs.go:123] Gathering logs for coredns [389e780b420b] ...
	I0729 05:03:45.557971   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 389e780b420b"
	I0729 05:03:45.570218   23149 logs.go:123] Gathering logs for coredns [aa845cccf85f] ...
	I0729 05:03:45.570229   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa845cccf85f"
	I0729 05:03:45.582031   23149 logs.go:123] Gathering logs for kube-proxy [661ce72fecba] ...
	I0729 05:03:45.582041   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661ce72fecba"
	I0729 05:03:45.596975   23149 logs.go:123] Gathering logs for Docker ...
	I0729 05:03:45.596985   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:03:45.623431   23149 logs.go:123] Gathering logs for container status ...
	I0729 05:03:45.623443   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:03:45.635663   23149 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:03:45.635678   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:03:45.674508   23149 logs.go:123] Gathering logs for coredns [d3c97d2c207a] ...
	I0729 05:03:45.674522   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c97d2c207a"
	I0729 05:03:45.686832   23149 logs.go:123] Gathering logs for kube-scheduler [5967b8d04dab] ...
	I0729 05:03:45.686843   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5967b8d04dab"
	I0729 05:03:45.703391   23149 logs.go:123] Gathering logs for storage-provisioner [9d81d423ae68] ...
	I0729 05:03:45.703407   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d81d423ae68"
	I0729 05:03:45.714586   23149 logs.go:123] Gathering logs for dmesg ...
	I0729 05:03:45.714601   23149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:03:45.719167   23149 logs.go:123] Gathering logs for coredns [3e75dce8d6fb] ...
	I0729 05:03:45.719174   23149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e75dce8d6fb"
	I0729 05:03:48.232957   23149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:03:53.235440   23149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:03:53.240102   23149 out.go:177] 
	W0729 05:03:53.244104   23149 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 05:03:53.244125   23149 out.go:239] * 
	W0729 05:03:53.245693   23149 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:03:53.255007   23149 out.go:177] 

                                                
                                                
** /stderr **
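The stderr above records minikube's API-server wait loop: every few seconds api_server.go issues a GET against https://10.0.2.15:8443/healthz with a short per-request client timeout, gathers component logs after each failed probe, and finally exits with GUEST_START once the overall 6m0s node-wait deadline passes. A minimal sketch of that polling pattern in Go (the function name, intervals, and insecure TLS setting are illustrative assumptions, not minikube's actual implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// 200 or the overall deadline expires. Illustrative only; minikube's
// real loop (the api_server.go lines above) also collects component
// logs between attempts.
func waitForHealthz(url string, interval, perRequest, overall time.Duration) error {
	client := &http.Client{
		Timeout: perRequest, // mirrors the "Client.Timeout exceeded" errors in the log
		Transport: &http.Transport{
			// assumption: skip verification of the apiserver's self-signed cert
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	err := waitForHealthz("https://10.0.2.15:8443/healthz", 2*time.Second, 5*time.Second, 6*time.Minute)
	fmt.Println(err)
}

In the run above every probe timed out, so the loop never saw a 200 and the start command failed with exit status 80.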
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-965000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-29 05:03:53.346789 -0700 PDT m=+1264.175951751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-965000 -n running-upgrade-965000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-965000 -n running-upgrade-965000: exit status 2 (15.734198416s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
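"May be ok" because `minikube status` encodes component state in its exit code rather than signaling a crash; per the command's help text the state is bit-encoded (1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so exit status 2 with "Running" on stdout means the host VM is up but the cluster is unhealthy. A hedged sketch of decoding that convention (helper name invented for illustration; verify the bit layout against your minikube version):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// decodeStatusExit interprets a `minikube status` exit code using the
// bit layout documented in `minikube status --help`. Assumption: that
// layout is stable across the versions involved here.
func decodeStatusExit(code int) string {
	if code == 0 {
		return "everything OK"
	}
	var parts []string
	if code&1 != 0 {
		parts = append(parts, "host not OK")
	}
	if code&2 != 0 {
		parts = append(parts, "cluster not OK")
	}
	if code&4 != 0 {
		parts = append(parts, "kubernetes not OK")
	}
	return strings.Join(parts, ", ")
}

func main() {
	cmd := exec.Command("minikube", "status", "-p", "running-upgrade-965000")
	err := cmd.Run()
	if cmd.ProcessState == nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Println(decodeStatusExit(cmd.ProcessState.ExitCode()))
}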
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-965000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-394000 sudo cat                            | cilium-394000             | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-394000 sudo cat                            | cilium-394000             | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-394000 sudo                                | cilium-394000             | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-394000 sudo                                | cilium-394000             | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-394000 sudo                                | cilium-394000             | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-394000 sudo cat                            | cilium-394000             | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-394000 sudo cat                            | cilium-394000             | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-394000 sudo                                | cilium-394000             | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-394000 sudo                                | cilium-394000             | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-394000 sudo                                | cilium-394000             | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-394000 sudo find                           | cilium-394000             | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-394000 sudo crio                           | cilium-394000             | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-394000                                     | cilium-394000             | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT | 29 Jul 24 04:53 PDT |
	| start   | -p kubernetes-upgrade-530000                         | kubernetes-upgrade-530000 | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-754000                             | offline-docker-754000     | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT | 29 Jul 24 04:53 PDT |
	| start   | -p stopped-upgrade-370000                            | minikube                  | jenkins | v1.26.0 | 29 Jul 24 04:53 PDT | 29 Jul 24 04:54 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-530000                         | kubernetes-upgrade-530000 | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT | 29 Jul 24 04:53 PDT |
	| start   | -p kubernetes-upgrade-530000                         | kubernetes-upgrade-530000 | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-530000                         | kubernetes-upgrade-530000 | jenkins | v1.33.1 | 29 Jul 24 04:53 PDT | 29 Jul 24 04:53 PDT |
	| start   | -p running-upgrade-965000                            | minikube                  | jenkins | v1.26.0 | 29 Jul 24 04:53 PDT | 29 Jul 24 04:54 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-370000 stop                          | minikube                  | jenkins | v1.26.0 | 29 Jul 24 04:54 PDT | 29 Jul 24 04:54 PDT |
	| start   | -p stopped-upgrade-370000                            | stopped-upgrade-370000    | jenkins | v1.33.1 | 29 Jul 24 04:54 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-965000                            | running-upgrade-965000    | jenkins | v1.33.1 | 29 Jul 24 04:54 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-370000                            | stopped-upgrade-370000    | jenkins | v1.33.1 | 29 Jul 24 05:04 PDT | 29 Jul 24 05:04 PDT |
	| start   | -p pause-031000 --memory=2048                        | pause-031000              | jenkins | v1.33.1 | 29 Jul 24 05:04 PDT |                     |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                            |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 05:04:05
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 05:04:05.930998   23552 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:04:05.931127   23552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:04:05.931129   23552 out.go:304] Setting ErrFile to fd 2...
	I0729 05:04:05.931130   23552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:04:05.931273   23552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:04:05.932381   23552 out.go:298] Setting JSON to false
	I0729 05:04:05.949882   23552 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11014,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:04:05.949973   23552 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:04:05.954775   23552 out.go:177] * [pause-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:04:05.962841   23552 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:04:05.962876   23552 notify.go:220] Checking for updates...
	I0729 05:04:05.970803   23552 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:04:05.973807   23552 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:04:05.976764   23552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:04:05.979742   23552 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:04:05.982664   23552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:04:05.986013   23552 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:04:05.986072   23552 config.go:182] Loaded profile config "running-upgrade-965000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 05:04:05.986112   23552 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:04:05.990769   23552 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:04:05.997788   23552 start.go:297] selected driver: qemu2
	I0729 05:04:05.997791   23552 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:04:05.997795   23552 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:04:06.000364   23552 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:04:06.003743   23552 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:04:06.005346   23552 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:04:06.005367   23552 cni.go:84] Creating CNI manager for ""
	I0729 05:04:06.005373   23552 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:04:06.005381   23552 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 05:04:06.005404   23552 start.go:340] cluster config:
	{Name:pause-031000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:04:06.009419   23552 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:04:06.013784   23552 out.go:177] * Starting "pause-031000" primary control-plane node in "pause-031000" cluster
	I0729 05:04:06.017783   23552 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:04:06.017800   23552 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:04:06.017821   23552 cache.go:56] Caching tarball of preloaded images
	I0729 05:04:06.017895   23552 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:04:06.017898   23552 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:04:06.017959   23552 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/pause-031000/config.json ...
	I0729 05:04:06.017968   23552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/pause-031000/config.json: {Name:mkbfdb3018abe84e7bd5012ce84bc7e24232f90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:04:06.018237   23552 start.go:360] acquireMachinesLock for pause-031000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:04:06.018265   23552 start.go:364] duration metric: took 24.833µs to acquireMachinesLock for "pause-031000"
	I0729 05:04:06.018273   23552 start.go:93] Provisioning new machine with config: &{Name:pause-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:04:06.018306   23552 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:04:06.025762   23552 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 05:04:06.048504   23552 start.go:159] libmachine.API.Create for "pause-031000" (driver="qemu2")
	I0729 05:04:06.048530   23552 client.go:168] LocalClient.Create starting
	I0729 05:04:06.048595   23552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:04:06.048622   23552 main.go:141] libmachine: Decoding PEM data...
	I0729 05:04:06.048633   23552 main.go:141] libmachine: Parsing certificate...
	I0729 05:04:06.048675   23552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:04:06.048696   23552 main.go:141] libmachine: Decoding PEM data...
	I0729 05:04:06.048704   23552 main.go:141] libmachine: Parsing certificate...
	I0729 05:04:06.049021   23552 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:04:06.196660   23552 main.go:141] libmachine: Creating SSH key...
	I0729 05:04:06.260177   23552 main.go:141] libmachine: Creating Disk image...
	I0729 05:04:06.260181   23552 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:04:06.260339   23552 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/pause-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/pause-031000/disk.qcow2
	I0729 05:04:06.270023   23552 main.go:141] libmachine: STDOUT: 
	I0729 05:04:06.270039   23552 main.go:141] libmachine: STDERR: 
	I0729 05:04:06.270084   23552 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/pause-031000/disk.qcow2 +20000M
	I0729 05:04:06.278413   23552 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:04:06.278423   23552 main.go:141] libmachine: STDERR: 
	I0729 05:04:06.278447   23552 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/pause-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/pause-031000/disk.qcow2
	I0729 05:04:06.278451   23552 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:04:06.278467   23552 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:04:06.278494   23552 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/pause-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/pause-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/pause-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:84:4c:85:a6:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/pause-031000/disk.qcow2
	I0729 05:04:06.280333   23552 main.go:141] libmachine: STDOUT: 
	I0729 05:04:06.280344   23552 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:04:06.280367   23552 client.go:171] duration metric: took 231.8375ms to LocalClient.Create
	I0729 05:04:08.282529   23552 start.go:128] duration metric: took 2.264238375s to createHost
	I0729 05:04:08.282563   23552 start.go:83] releasing machines lock for "pause-031000", held for 2.264333125s
	W0729 05:04:08.282753   23552 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:04:08.295779   23552 out.go:177] * Deleting "pause-031000" in qemu2 ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-07-29 11:54:28 UTC, ends at Mon 2024-07-29 12:04:09 UTC. --
	Jul 29 12:03:53 running-upgrade-965000 dockerd[4479]: time="2024-07-29T12:03:53.525152331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 12:03:53 running-upgrade-965000 dockerd[4479]: time="2024-07-29T12:03:53.525257747Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6c2bbd6f52cf23a50e96865c83c2568277a41ee1e3a73cf8bd0f0b67fb05db56 pid=18820 runtime=io.containerd.runc.v2
	Jul 29 12:03:53 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:03:53Z" level=error msg="ContainerStats resp: {0x400084ab40 linux}"
	Jul 29 12:03:53 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:03:53Z" level=error msg="ContainerStats resp: {0x40007a4e00 linux}"
	Jul 29 12:03:54 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:03:54Z" level=error msg="ContainerStats resp: {0x40001e4080 linux}"
	Jul 29 12:03:55 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:03:55Z" level=error msg="ContainerStats resp: {0x400063ec80 linux}"
	Jul 29 12:03:55 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:03:55Z" level=error msg="ContainerStats resp: {0x40001e5180 linux}"
	Jul 29 12:03:55 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:03:55Z" level=error msg="ContainerStats resp: {0x40001e5580 linux}"
	Jul 29 12:03:55 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:03:55Z" level=error msg="ContainerStats resp: {0x400063fb40 linux}"
	Jul 29 12:03:55 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:03:55Z" level=error msg="ContainerStats resp: {0x40001e5f80 linux}"
	Jul 29 12:03:55 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:03:55Z" level=error msg="ContainerStats resp: {0x4000924180 linux}"
	Jul 29 12:03:55 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:03:55Z" level=error msg="ContainerStats resp: {0x40009243c0 linux}"
	Jul 29 12:03:57 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:03:57Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 12:04:02 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:04:02Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 12:04:05 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:04:05Z" level=error msg="ContainerStats resp: {0x400063e280 linux}"
	Jul 29 12:04:05 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:04:05Z" level=error msg="ContainerStats resp: {0x400063eb00 linux}"
	Jul 29 12:04:06 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:04:06Z" level=error msg="ContainerStats resp: {0x4000924140 linux}"
	Jul 29 12:04:07 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:04:07Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 12:04:08 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:04:08Z" level=error msg="ContainerStats resp: {0x40009253c0 linux}"
	Jul 29 12:04:08 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:04:08Z" level=error msg="ContainerStats resp: {0x400063ed40 linux}"
	Jul 29 12:04:08 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:04:08Z" level=error msg="ContainerStats resp: {0x400063f1c0 linux}"
	Jul 29 12:04:08 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:04:08Z" level=error msg="ContainerStats resp: {0x400063f600 linux}"
	Jul 29 12:04:08 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:04:08Z" level=error msg="ContainerStats resp: {0x4000730600 linux}"
	Jul 29 12:04:08 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:04:08Z" level=error msg="ContainerStats resp: {0x4000730a40 linux}"
	Jul 29 12:04:08 running-upgrade-965000 cri-dockerd[4242]: time="2024-07-29T12:04:08Z" level=error msg="ContainerStats resp: {0x400084a500 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	6b7aae7a82280       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   0a90da3d345b4
	6c2bbd6f52cf2       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   de3e447f647bb
	389e780b420bd       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   0a90da3d345b4
	d3c97d2c207a8       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   de3e447f647bb
	9d81d423ae68d       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   1755803b63c33
	661ce72fecba7       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   8175e81d0d5fb
	fb4efc2f298be       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   72af6d7b89e1b
	be5618e93173d       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   3b371b15f04cb
	d71d59296a6da       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   d9e469b84f53b
	5967b8d04dabd       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   5befb7d3d7cfd
	
	
	==> coredns [389e780b420b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1761007373374038763.4956424590026093012. HINFO: read udp 10.244.0.2:47769->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1761007373374038763.4956424590026093012. HINFO: read udp 10.244.0.2:37787->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1761007373374038763.4956424590026093012. HINFO: read udp 10.244.0.2:48218->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1761007373374038763.4956424590026093012. HINFO: read udp 10.244.0.2:55856->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1761007373374038763.4956424590026093012. HINFO: read udp 10.244.0.2:37890->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1761007373374038763.4956424590026093012. HINFO: read udp 10.244.0.2:52903->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1761007373374038763.4956424590026093012. HINFO: read udp 10.244.0.2:58310->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1761007373374038763.4956424590026093012. HINFO: read udp 10.244.0.2:52306->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1761007373374038763.4956424590026093012. HINFO: read udp 10.244.0.2:37440->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1761007373374038763.4956424590026093012. HINFO: read udp 10.244.0.2:59568->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6b7aae7a8228] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2353955237996282876.8671603658750991248. HINFO: read udp 10.244.0.2:57056->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2353955237996282876.8671603658750991248. HINFO: read udp 10.244.0.2:60055->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2353955237996282876.8671603658750991248. HINFO: read udp 10.244.0.2:50707->10.0.2.3:53: i/o timeout
	
	
	==> coredns [6c2bbd6f52cf] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6026049884592683082.448065392767841183. HINFO: read udp 10.244.0.3:49577->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6026049884592683082.448065392767841183. HINFO: read udp 10.244.0.3:38603->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6026049884592683082.448065392767841183. HINFO: read udp 10.244.0.3:50152->10.0.2.3:53: i/o timeout
	
	
	==> coredns [d3c97d2c207a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5786450807057314990.1590738926038480093. HINFO: read udp 10.244.0.3:43973->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5786450807057314990.1590738926038480093. HINFO: read udp 10.244.0.3:34086->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5786450807057314990.1590738926038480093. HINFO: read udp 10.244.0.3:45525->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5786450807057314990.1590738926038480093. HINFO: read udp 10.244.0.3:52514->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5786450807057314990.1590738926038480093. HINFO: read udp 10.244.0.3:44870->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5786450807057314990.1590738926038480093. HINFO: read udp 10.244.0.3:46480->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5786450807057314990.1590738926038480093. HINFO: read udp 10.244.0.3:56003->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5786450807057314990.1590738926038480093. HINFO: read udp 10.244.0.3:46223->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5786450807057314990.1590738926038480093. HINFO: read udp 10.244.0.3:42138->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5786450807057314990.1590738926038480093. HINFO: read udp 10.244.0.3:42002->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
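
All four CoreDNS logs above fail the same way: HINFO self-test queries from the pod network (10.244.0.2 and 10.244.0.3) to the upstream resolver at 10.0.2.3:53 time out, so DNS forwarding out of the VM is broken even though CoreDNS itself is serving on :53. A hedged Go sketch (illustrative only; the resolver address comes from the log, the looked-up name is arbitrary) that exercises the same path with an explicit timeout:

// dns_probe.go - query the 10.0.2.3 forwarder directly, mirroring the
// probes that CoreDNS reports as "i/o timeout" above.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Send every lookup to the guest-side forwarder from the log.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.0.2.3:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "registry.k8s.io") // arbitrary external name
	fmt.Println(addrs, err) // an i/o timeout here matches the CoreDNS errors
}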
	
	
	==> describe nodes <==
	Name:               running-upgrade-965000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-965000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=running-upgrade-965000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T04_59_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:59:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-965000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:04:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:59:52 +0000   Mon, 29 Jul 2024 11:59:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:59:52 +0000   Mon, 29 Jul 2024 11:59:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:59:52 +0000   Mon, 29 Jul 2024 11:59:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:59:52 +0000   Mon, 29 Jul 2024 11:59:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-965000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 55bbcf5fc1e74de4bffca84f1e6dd4c2
	  System UUID:                55bbcf5fc1e74de4bffca84f1e6dd4c2
	  Boot ID:                    f1747272-573a-4af6-902f-0013c3133fc3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-76lhv                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-vbhr4                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-965000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-965000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-965000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-h8wq6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-965000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-965000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-965000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-965000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-965000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-965000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-965000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-965000 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-965000 event: Registered Node running-upgrade-965000 in Controller
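
As a quick consistency check on the resource tables above: the node reports 2 allocatable CPUs (2000m), so 850m of CPU requests is 850/2000 = 42.5%, printed as 42%; with 2148820Ki (roughly 2098Mi) of allocatable memory, 240Mi of requests is about 11% and the 340Mi of limits about 16%, matching the printed percentages.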
	
	
	==> dmesg <==
	[  +0.068103] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +0.063199] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +1.120277] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.089300] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.079298] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +1.947844] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[  +9.648723] systemd-fstab-generator[1908]: Ignoring "noauto" for root device
	[Jul29 11:55] kauditd_printk_skb: 47 callbacks suppressed
	[  +1.262090] systemd-fstab-generator[2496]: Ignoring "noauto" for root device
	[  +0.212902] systemd-fstab-generator[2542]: Ignoring "noauto" for root device
	[  +0.099260] systemd-fstab-generator[2553]: Ignoring "noauto" for root device
	[  +0.129917] systemd-fstab-generator[2605]: Ignoring "noauto" for root device
	[  +6.202754] kauditd_printk_skb: 11 callbacks suppressed
	[ +15.582825] systemd-fstab-generator[4199]: Ignoring "noauto" for root device
	[  +0.084629] systemd-fstab-generator[4210]: Ignoring "noauto" for root device
	[  +0.083868] systemd-fstab-generator[4221]: Ignoring "noauto" for root device
	[  +0.096085] systemd-fstab-generator[4235]: Ignoring "noauto" for root device
	[  +2.391856] systemd-fstab-generator[4464]: Ignoring "noauto" for root device
	[  +1.361064] systemd-fstab-generator[4817]: Ignoring "noauto" for root device
	[  +1.192799] systemd-fstab-generator[4944]: Ignoring "noauto" for root device
	[  +1.896018] kauditd_printk_skb: 77 callbacks suppressed
	[ +15.751016] kauditd_printk_skb: 3 callbacks suppressed
	[Jul29 11:59] systemd-fstab-generator[11944]: Ignoring "noauto" for root device
	[  +6.155324] systemd-fstab-generator[12582]: Ignoring "noauto" for root device
	[  +0.461824] systemd-fstab-generator[12714]: Ignoring "noauto" for root device
	
	
	==> etcd [fb4efc2f298b] <==
	{"level":"info","ts":"2024-07-29T11:59:47.171Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-29T11:59:47.171Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-29T11:59:47.172Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T11:59:47.172Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T11:59:47.172Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T11:59:47.172Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T11:59:47.172Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T11:59:48.145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T11:59:48.145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T11:59:48.145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-29T11:59:48.145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T11:59:48.145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T11:59:48.145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T11:59:48.145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T11:59:48.145Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-965000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T11:59:48.145Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:59:48.146Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-29T11:59:48.146Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:59:48.146Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:59:48.146Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T11:59:48.147Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:59:48.147Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:59:48.147Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:59:48.147Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:59:48.147Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 12:04:09 up 9 min,  0 users,  load average: 0.81, 0.67, 0.33
	Linux running-upgrade-965000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d71d59296a6d] <==
	I0729 11:59:49.367968       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0729 11:59:49.389180       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 11:59:49.389901       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 11:59:49.390100       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 11:59:49.390138       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 11:59:49.390490       1 cache.go:39] Caches are synced for autoregister controller
	I0729 11:59:49.400307       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0729 11:59:50.144059       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 11:59:50.298376       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0729 11:59:50.303929       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0729 11:59:50.304113       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 11:59:50.452562       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 11:59:50.462654       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 11:59:50.562341       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0729 11:59:50.564147       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0729 11:59:50.564564       1 controller.go:611] quota admission added evaluator for: endpoints
	I0729 11:59:50.565821       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 11:59:51.434755       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 11:59:52.179201       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 11:59:52.182498       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0729 11:59:52.186941       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 11:59:52.233875       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 12:00:04.832170       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0729 12:00:04.932877       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0729 12:00:05.914324       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [be5618e93173] <==
	I0729 12:00:04.332048       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0729 12:00:04.333167       1 event.go:294] "Event occurred" object="running-upgrade-965000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-965000 event: Registered Node running-upgrade-965000 in Controller"
	I0729 12:00:04.337408       1 shared_informer.go:262] Caches are synced for TTL
	I0729 12:00:04.340267       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0729 12:00:04.340495       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0729 12:00:04.340512       1 shared_informer.go:262] Caches are synced for PV protection
	I0729 12:00:04.340544       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0729 12:00:04.340554       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0729 12:00:04.341622       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0729 12:00:04.341662       1 shared_informer.go:262] Caches are synced for HPA
	I0729 12:00:04.341791       1 shared_informer.go:262] Caches are synced for crt configmap
	I0729 12:00:04.401112       1 shared_informer.go:262] Caches are synced for attach detach
	I0729 12:00:04.480950       1 shared_informer.go:262] Caches are synced for persistent volume
	I0729 12:00:04.517221       1 shared_informer.go:262] Caches are synced for endpoint
	I0729 12:00:04.530559       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0729 12:00:04.531616       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0729 12:00:04.535458       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 12:00:04.557624       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 12:00:04.835297       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h8wq6"
	I0729 12:00:04.934810       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0729 12:00:04.944698       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 12:00:05.022396       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 12:00:05.022685       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0729 12:00:05.337196       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-76lhv"
	I0729 12:00:05.341259       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vbhr4"
	
	
	==> kube-proxy [661ce72fecba] <==
	I0729 12:00:05.903675       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0729 12:00:05.903700       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0729 12:00:05.903709       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 12:00:05.911900       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 12:00:05.911910       1 server_others.go:206] "Using iptables Proxier"
	I0729 12:00:05.911921       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 12:00:05.912012       1 server.go:661] "Version info" version="v1.24.1"
	I0729 12:00:05.912016       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:00:05.912237       1 config.go:317] "Starting service config controller"
	I0729 12:00:05.912249       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 12:00:05.912257       1 config.go:226] "Starting endpoint slice config controller"
	I0729 12:00:05.912259       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 12:00:05.912475       1 config.go:444] "Starting node config controller"
	I0729 12:00:05.912478       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 12:00:06.013575       1 shared_informer.go:262] Caches are synced for service config
	I0729 12:00:06.013600       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0729 12:00:06.013688       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [5967b8d04dab] <==
	W0729 11:59:49.349663       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:59:49.349672       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:59:49.349667       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:59:49.349697       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 11:59:49.349714       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:59:49.349725       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:59:49.349742       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 11:59:49.349764       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 11:59:49.349743       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:59:49.349796       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 11:59:49.349821       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 11:59:49.349828       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 11:59:49.349851       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:59:49.349855       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:59:49.349867       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:59:49.349870       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 11:59:49.349926       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:59:49.349942       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 11:59:50.210856       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:59:50.211098       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 11:59:50.246809       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 11:59:50.246921       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 11:59:50.336621       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:59:50.336647       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0729 11:59:50.641641       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-07-29 11:54:28 UTC, ends at Mon 2024-07-29 12:04:09 UTC. --
	Jul 29 12:00:04 running-upgrade-965000 kubelet[12588]: I0729 12:00:04.515886   12588 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9r9t\" (UniqueName: \"kubernetes.io/projected/b6bc92a3-f83b-468d-af8b-d8fe686d3b4e-kube-api-access-v9r9t\") pod \"storage-provisioner\" (UID: \"b6bc92a3-f83b-468d-af8b-d8fe686d3b4e\") " pod="kube-system/storage-provisioner"
	Jul 29 12:00:04 running-upgrade-965000 kubelet[12588]: I0729 12:00:04.515920   12588 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b6bc92a3-f83b-468d-af8b-d8fe686d3b4e-tmp\") pod \"storage-provisioner\" (UID: \"b6bc92a3-f83b-468d-af8b-d8fe686d3b4e\") " pod="kube-system/storage-provisioner"
	Jul 29 12:00:04 running-upgrade-965000 kubelet[12588]: E0729 12:00:04.622684   12588 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 29 12:00:04 running-upgrade-965000 kubelet[12588]: E0729 12:00:04.622707   12588 projected.go:192] Error preparing data for projected volume kube-api-access-v9r9t for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 29 12:00:04 running-upgrade-965000 kubelet[12588]: E0729 12:00:04.622744   12588 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b6bc92a3-f83b-468d-af8b-d8fe686d3b4e-kube-api-access-v9r9t podName:b6bc92a3-f83b-468d-af8b-d8fe686d3b4e nodeName:}" failed. No retries permitted until 2024-07-29 12:00:05.122731769 +0000 UTC m=+12.953768532 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v9r9t" (UniqueName: "kubernetes.io/projected/b6bc92a3-f83b-468d-af8b-d8fe686d3b4e-kube-api-access-v9r9t") pod "storage-provisioner" (UID: "b6bc92a3-f83b-468d-af8b-d8fe686d3b4e") : configmap "kube-root-ca.crt" not found
	Jul 29 12:00:04 running-upgrade-965000 kubelet[12588]: I0729 12:00:04.838123   12588 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: I0729 12:00:05.022560   12588 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pjls\" (UniqueName: \"kubernetes.io/projected/e58b82bf-4a3e-47c9-941f-a83e5dcae5cd-kube-api-access-6pjls\") pod \"kube-proxy-h8wq6\" (UID: \"e58b82bf-4a3e-47c9-941f-a83e5dcae5cd\") " pod="kube-system/kube-proxy-h8wq6"
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: I0729 12:00:05.022590   12588 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e58b82bf-4a3e-47c9-941f-a83e5dcae5cd-lib-modules\") pod \"kube-proxy-h8wq6\" (UID: \"e58b82bf-4a3e-47c9-941f-a83e5dcae5cd\") " pod="kube-system/kube-proxy-h8wq6"
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: I0729 12:00:05.022611   12588 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e58b82bf-4a3e-47c9-941f-a83e5dcae5cd-xtables-lock\") pod \"kube-proxy-h8wq6\" (UID: \"e58b82bf-4a3e-47c9-941f-a83e5dcae5cd\") " pod="kube-system/kube-proxy-h8wq6"
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: I0729 12:00:05.022623   12588 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e58b82bf-4a3e-47c9-941f-a83e5dcae5cd-kube-proxy\") pod \"kube-proxy-h8wq6\" (UID: \"e58b82bf-4a3e-47c9-941f-a83e5dcae5cd\") " pod="kube-system/kube-proxy-h8wq6"
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: E0729 12:00:05.123619   12588 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: E0729 12:00:05.123718   12588 projected.go:192] Error preparing data for projected volume kube-api-access-v9r9t for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: E0729 12:00:05.123782   12588 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b6bc92a3-f83b-468d-af8b-d8fe686d3b4e-kube-api-access-v9r9t podName:b6bc92a3-f83b-468d-af8b-d8fe686d3b4e nodeName:}" failed. No retries permitted until 2024-07-29 12:00:06.123747568 +0000 UTC m=+13.954784330 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-v9r9t" (UniqueName: "kubernetes.io/projected/b6bc92a3-f83b-468d-af8b-d8fe686d3b4e-kube-api-access-v9r9t") pod "storage-provisioner" (UID: "b6bc92a3-f83b-468d-af8b-d8fe686d3b4e") : configmap "kube-root-ca.crt" not found
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: E0729 12:00:05.127253   12588 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: E0729 12:00:05.127298   12588 projected.go:192] Error preparing data for projected volume kube-api-access-6pjls for pod kube-system/kube-proxy-h8wq6: configmap "kube-root-ca.crt" not found
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: E0729 12:00:05.127349   12588 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/e58b82bf-4a3e-47c9-941f-a83e5dcae5cd-kube-api-access-6pjls podName:e58b82bf-4a3e-47c9-941f-a83e5dcae5cd nodeName:}" failed. No retries permitted until 2024-07-29 12:00:05.627341068 +0000 UTC m=+13.458377830 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6pjls" (UniqueName: "kubernetes.io/projected/e58b82bf-4a3e-47c9-941f-a83e5dcae5cd-kube-api-access-6pjls") pod "kube-proxy-h8wq6" (UID: "e58b82bf-4a3e-47c9-941f-a83e5dcae5cd") : configmap "kube-root-ca.crt" not found
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: I0729 12:00:05.342156   12588 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: I0729 12:00:05.342933   12588 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: I0729 12:00:05.529125   12588 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn2rr\" (UniqueName: \"kubernetes.io/projected/55a68811-05e3-49a6-97af-1ebef2134eaa-kube-api-access-fn2rr\") pod \"coredns-6d4b75cb6d-vbhr4\" (UID: \"55a68811-05e3-49a6-97af-1ebef2134eaa\") " pod="kube-system/coredns-6d4b75cb6d-vbhr4"
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: I0729 12:00:05.529262   12588 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8b1bff7-8f76-450f-9c00-71c056e6e852-config-volume\") pod \"coredns-6d4b75cb6d-76lhv\" (UID: \"c8b1bff7-8f76-450f-9c00-71c056e6e852\") " pod="kube-system/coredns-6d4b75cb6d-76lhv"
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: I0729 12:00:05.529280   12588 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55a68811-05e3-49a6-97af-1ebef2134eaa-config-volume\") pod \"coredns-6d4b75cb6d-vbhr4\" (UID: \"55a68811-05e3-49a6-97af-1ebef2134eaa\") " pod="kube-system/coredns-6d4b75cb6d-vbhr4"
	Jul 29 12:00:05 running-upgrade-965000 kubelet[12588]: I0729 12:00:05.529298   12588 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxb6d\" (UniqueName: \"kubernetes.io/projected/c8b1bff7-8f76-450f-9c00-71c056e6e852-kube-api-access-vxb6d\") pod \"coredns-6d4b75cb6d-76lhv\" (UID: \"c8b1bff7-8f76-450f-9c00-71c056e6e852\") " pod="kube-system/coredns-6d4b75cb6d-76lhv"
	Jul 29 12:00:06 running-upgrade-965000 kubelet[12588]: I0729 12:00:06.461346   12588 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="de3e447f647bb2e910813165c6ea796b857b53507d6efb99d503e64ed60319b5"
	Jul 29 12:03:53 running-upgrade-965000 kubelet[12588]: I0729 12:03:53.790771   12588 scope.go:110] "RemoveContainer" containerID="aa845cccf85f4e07481bbbff3e80e608702f3341a1180a26481b65d5f9d77982"
	Jul 29 12:03:53 running-upgrade-965000 kubelet[12588]: I0729 12:03:53.801483   12588 scope.go:110] "RemoveContainer" containerID="3e75dce8d6fb1363e89bc7a31066034d53d3ddc17003206f8dd9169a0e87cd0c"
	
	
	==> storage-provisioner [9d81d423ae68] <==
	I0729 12:00:06.420896       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 12:00:06.425820       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 12:00:06.425899       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 12:00:06.434454       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 12:00:06.434594       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5d1c4f09-ad17-46d0-8190-c5e516bb16fd", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-965000_a52499f9-1a33-4245-a48e-0e3ad5892bee became leader
	I0729 12:00:06.435496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-965000_a52499f9-1a33-4245-a48e-0e3ad5892bee!
	I0729 12:00:06.535717       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-965000_a52499f9-1a33-4245-a48e-0e3ad5892bee!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-965000 -n running-upgrade-965000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-965000 -n running-upgrade-965000: exit status 2 (15.577002166s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-965000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-965000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-965000
--- FAIL: TestRunningBinaryUpgrade (635.24s)

TestKubernetesUpgrade (17.53s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.989083708s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-530000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-530000" primary control-plane node in "kubernetes-upgrade-530000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-530000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
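
Note that the stdout above shows the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' as the pause-031000 excerpt earlier, so this test shares the same environmental root cause (see the dial sketch after that excerpt) rather than failing in the v1.20.0 upgrade path itself.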
** stderr ** 
	I0729 04:53:32.715007   23044 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:53:32.715133   23044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:53:32.715136   23044 out.go:304] Setting ErrFile to fd 2...
	I0729 04:53:32.715139   23044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:53:32.715272   23044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:53:32.716340   23044 out.go:298] Setting JSON to false
	I0729 04:53:32.732490   23044 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10381,"bootTime":1722243631,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:53:32.732588   23044 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:53:32.737511   23044 out.go:177] * [kubernetes-upgrade-530000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:53:32.740524   23044 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:53:32.740572   23044 notify.go:220] Checking for updates...
	I0729 04:53:32.747473   23044 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:53:32.748894   23044 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:53:32.751609   23044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:53:32.754477   23044 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:53:32.757446   23044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:53:32.760867   23044 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:53:32.760926   23044 config.go:182] Loaded profile config "offline-docker-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:53:32.760974   23044 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:53:32.765394   23044 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:53:32.772398   23044 start.go:297] selected driver: qemu2
	I0729 04:53:32.772404   23044 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:53:32.772415   23044 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:53:32.774671   23044 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:53:32.777433   23044 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:53:32.780551   23044 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:53:32.780585   23044 cni.go:84] Creating CNI manager for ""
	I0729 04:53:32.780594   23044 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 04:53:32.780623   23044 start.go:340] cluster config:
	{Name:kubernetes-upgrade-530000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-530000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:53:32.784326   23044 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:53:32.792430   23044 out.go:177] * Starting "kubernetes-upgrade-530000" primary control-plane node in "kubernetes-upgrade-530000" cluster
	I0729 04:53:32.795418   23044 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:53:32.795434   23044 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:53:32.795448   23044 cache.go:56] Caching tarball of preloaded images
	I0729 04:53:32.795509   23044 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:53:32.795517   23044 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 04:53:32.795584   23044 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/kubernetes-upgrade-530000/config.json ...
	I0729 04:53:32.795598   23044 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/kubernetes-upgrade-530000/config.json: {Name:mk8c777efceda0881c96cd5c1c7e2f4ddc08d3ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:53:32.796007   23044 start.go:360] acquireMachinesLock for kubernetes-upgrade-530000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:53:32.876563   23044 start.go:364] duration metric: took 80.545834ms to acquireMachinesLock for "kubernetes-upgrade-530000"
	I0729 04:53:32.876599   23044 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-530000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-530000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:53:32.876694   23044 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:53:32.881025   23044 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:53:32.908855   23044 start.go:159] libmachine.API.Create for "kubernetes-upgrade-530000" (driver="qemu2")
	I0729 04:53:32.908890   23044 client.go:168] LocalClient.Create starting
	I0729 04:53:32.908977   23044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 04:53:32.909024   23044 main.go:141] libmachine: Decoding PEM data...
	I0729 04:53:32.909041   23044 main.go:141] libmachine: Parsing certificate...
	I0729 04:53:32.909100   23044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 04:53:32.909135   23044 main.go:141] libmachine: Decoding PEM data...
	I0729 04:53:32.909151   23044 main.go:141] libmachine: Parsing certificate...
	I0729 04:53:32.911354   23044 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:53:33.084776   23044 main.go:141] libmachine: Creating SSH key...
	I0729 04:53:33.221977   23044 main.go:141] libmachine: Creating Disk image...
	I0729 04:53:33.221983   23044 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:53:33.222214   23044 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I0729 04:53:33.231720   23044 main.go:141] libmachine: STDOUT: 
	I0729 04:53:33.231737   23044 main.go:141] libmachine: STDERR: 
	I0729 04:53:33.231791   23044 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2 +20000M
	I0729 04:53:33.239668   23044 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:53:33.239684   23044 main.go:141] libmachine: STDERR: 
	I0729 04:53:33.239698   23044 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I0729 04:53:33.239702   23044 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:53:33.239719   23044 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:53:33.239748   23044 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:74:08:1d:ae:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I0729 04:53:33.241336   23044 main.go:141] libmachine: STDOUT: 
	I0729 04:53:33.241355   23044 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:53:33.241374   23044 client.go:171] duration metric: took 332.486292ms to LocalClient.Create
	I0729 04:53:35.243515   23044 start.go:128] duration metric: took 2.366857333s to createHost
	I0729 04:53:35.243587   23044 start.go:83] releasing machines lock for "kubernetes-upgrade-530000", held for 2.367061292s
	W0729 04:53:35.243671   23044 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:53:35.250866   23044 out.go:177] * Deleting "kubernetes-upgrade-530000" in qemu2 ...
	W0729 04:53:35.291102   23044 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:53:35.291132   23044 start.go:729] Will try again in 5 seconds ...
	I0729 04:53:40.292738   23044 start.go:360] acquireMachinesLock for kubernetes-upgrade-530000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:53:40.292917   23044 start.go:364] duration metric: took 127.792µs to acquireMachinesLock for "kubernetes-upgrade-530000"
	I0729 04:53:40.292953   23044 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-530000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-530000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:53:40.293054   23044 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:53:40.301218   23044 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:53:40.327544   23044 start.go:159] libmachine.API.Create for "kubernetes-upgrade-530000" (driver="qemu2")
	I0729 04:53:40.327573   23044 client.go:168] LocalClient.Create starting
	I0729 04:53:40.327635   23044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 04:53:40.327671   23044 main.go:141] libmachine: Decoding PEM data...
	I0729 04:53:40.327683   23044 main.go:141] libmachine: Parsing certificate...
	I0729 04:53:40.327721   23044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 04:53:40.327742   23044 main.go:141] libmachine: Decoding PEM data...
	I0729 04:53:40.327753   23044 main.go:141] libmachine: Parsing certificate...
	I0729 04:53:40.328126   23044 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:53:40.513211   23044 main.go:141] libmachine: Creating SSH key...
	I0729 04:53:40.624393   23044 main.go:141] libmachine: Creating Disk image...
	I0729 04:53:40.624399   23044 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:53:40.624568   23044 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I0729 04:53:40.633951   23044 main.go:141] libmachine: STDOUT: 
	I0729 04:53:40.633966   23044 main.go:141] libmachine: STDERR: 
	I0729 04:53:40.634023   23044 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2 +20000M
	I0729 04:53:40.641760   23044 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:53:40.641776   23044 main.go:141] libmachine: STDERR: 
	I0729 04:53:40.641786   23044 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I0729 04:53:40.641796   23044 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:53:40.641808   23044 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:53:40.641838   23044 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:82:23:bf:98:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I0729 04:53:40.643449   23044 main.go:141] libmachine: STDOUT: 
	I0729 04:53:40.643471   23044 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:53:40.643483   23044 client.go:171] duration metric: took 315.913625ms to LocalClient.Create
	I0729 04:53:42.645537   23044 start.go:128] duration metric: took 2.352523875s to createHost
	I0729 04:53:42.645588   23044 start.go:83] releasing machines lock for "kubernetes-upgrade-530000", held for 2.352709125s
	W0729 04:53:42.645678   23044 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-530000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-530000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:53:42.653243   23044 out.go:177] 
	W0729 04:53:42.657239   23044 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:53:42.657253   23044 out.go:239] * 
	* 
	W0729 04:53:42.657854   23044 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:53:42.668152   23044 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
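[Note] Every qemu2 start in this test dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the VM is never launched. A minimal diagnostic sketch, not part of the test suite, that performs essentially the same connection the driver attempts (the socket path is taken from the SocketVMnetPath field in the config dump above):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// If the socket_vmnet daemon is not running, or the CI agent lacks
		// permission to open the socket, this fails with the same
		// "Connection refused" seen throughout the log above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On this agent the probe would presumably fail, pointing at the socket_vmnet service on the host rather than at minikube itself.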
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-530000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-530000: (2.079667542s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-530000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-530000 status --format={{.Host}}: exit status 7 (65.170375ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.213531834s)

-- stdout --
	* [kubernetes-upgrade-530000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-530000" primary control-plane node in "kubernetes-upgrade-530000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-530000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-530000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:53:44.853292   23089 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:53:44.853431   23089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:53:44.853435   23089 out.go:304] Setting ErrFile to fd 2...
	I0729 04:53:44.853437   23089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:53:44.853540   23089 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:53:44.854523   23089 out.go:298] Setting JSON to false
	I0729 04:53:44.870895   23089 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10393,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:53:44.870958   23089 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:53:44.875763   23089 out.go:177] * [kubernetes-upgrade-530000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:53:44.883773   23089 notify.go:220] Checking for updates...
	I0729 04:53:44.887636   23089 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:53:44.894652   23089 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:53:44.902660   23089 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:53:44.910615   23089 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:53:44.917481   23089 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:53:44.925620   23089 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:53:44.929890   23089 config.go:182] Loaded profile config "kubernetes-upgrade-530000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 04:53:44.930140   23089 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:53:44.934609   23089 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:53:44.942018   23089 start.go:297] selected driver: qemu2
	I0729 04:53:44.942025   23089 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-530000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-530000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:53:44.942083   23089 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:53:44.944410   23089 cni.go:84] Creating CNI manager for ""
	I0729 04:53:44.944429   23089 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:53:44.944459   23089 start.go:340] cluster config:
	{Name:kubernetes-upgrade-530000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-530000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:53:44.947920   23089 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:53:44.955647   23089 out.go:177] * Starting "kubernetes-upgrade-530000" primary control-plane node in "kubernetes-upgrade-530000" cluster
	I0729 04:53:44.959650   23089 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:53:44.959663   23089 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 04:53:44.959674   23089 cache.go:56] Caching tarball of preloaded images
	I0729 04:53:44.959726   23089 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:53:44.959731   23089 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 04:53:44.959786   23089 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/kubernetes-upgrade-530000/config.json ...
	I0729 04:53:44.960095   23089 start.go:360] acquireMachinesLock for kubernetes-upgrade-530000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:53:44.960129   23089 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "kubernetes-upgrade-530000"
	I0729 04:53:44.960138   23089 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:53:44.960143   23089 fix.go:54] fixHost starting: 
	I0729 04:53:44.960253   23089 fix.go:112] recreateIfNeeded on kubernetes-upgrade-530000: state=Stopped err=<nil>
	W0729 04:53:44.960261   23089 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:53:44.967165   23089 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-530000" ...
	I0729 04:53:44.970630   23089 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:53:44.970662   23089 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:82:23:bf:98:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I0729 04:53:44.972628   23089 main.go:141] libmachine: STDOUT: 
	I0729 04:53:44.972644   23089 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:53:44.972673   23089 fix.go:56] duration metric: took 12.53025ms for fixHost
	I0729 04:53:44.972677   23089 start.go:83] releasing machines lock for "kubernetes-upgrade-530000", held for 12.544292ms
	W0729 04:53:44.972682   23089 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:53:44.972711   23089 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:53:44.972715   23089 start.go:729] Will try again in 5 seconds ...
	I0729 04:53:49.973192   23089 start.go:360] acquireMachinesLock for kubernetes-upgrade-530000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:53:49.973740   23089 start.go:364] duration metric: took 421.917µs to acquireMachinesLock for "kubernetes-upgrade-530000"
	I0729 04:53:49.973905   23089 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:53:49.973927   23089 fix.go:54] fixHost starting: 
	I0729 04:53:49.974714   23089 fix.go:112] recreateIfNeeded on kubernetes-upgrade-530000: state=Stopped err=<nil>
	W0729 04:53:49.974742   23089 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:53:49.988703   23089 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-530000" ...
	I0729 04:53:49.993716   23089 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:53:49.994063   23089 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:82:23:bf:98:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I0729 04:53:50.004186   23089 main.go:141] libmachine: STDOUT: 
	I0729 04:53:50.004258   23089 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:53:50.004356   23089 fix.go:56] duration metric: took 30.430375ms for fixHost
	I0729 04:53:50.004377   23089 start.go:83] releasing machines lock for "kubernetes-upgrade-530000", held for 30.612958ms
	W0729 04:53:50.004577   23089 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-530000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-530000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:53:50.011653   23089 out.go:177] 
	W0729 04:53:50.014759   23089 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:53:50.014782   23089 out.go:239] * 
	* 
	W0729 04:53:50.017350   23089 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:53:50.023378   23089 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-530000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-530000 version --output=json: exit status 1 (63.998875ms)

** stderr ** 
	error: context "kubernetes-upgrade-530000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
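[Note] The kubectl failure is a downstream effect: both start attempts exited before provisioning, so minikube never wrote a kubernetes-upgrade-530000 context into the kubeconfig. A short sketch of the same check using client-go's standard loading rules (a hypothetical helper, not test code):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load kubeconfig via the default rules (the KUBECONFIG env var,
		// falling back to ~/.kube/config).
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts["kubernetes-upgrade-530000"]; !ok {
			fmt.Println(`context "kubernetes-upgrade-530000" does not exist`)
		}
	}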
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-29 04:53:50.102421 -0700 PDT m=+660.940895292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-530000 -n kubernetes-upgrade-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-530000 -n kubernetes-upgrade-530000: exit status 7 (33.642541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-530000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-530000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-530000
--- FAIL: TestKubernetesUpgrade (17.53s)
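[Note] The shape of the failure above is worth keeping in mind when reading the rest of this report: start.go attempts to create the host, deletes the half-created profile, sleeps five seconds, and retries exactly once before exiting with GUEST_PROVISION (exit status 80). A stripped-down sketch of that control flow, with createHost standing in for the real libmachine call:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the single fixed-delay retry visible in the
	// log (start.go:714/729); createHost is a stand-in for the real call.
	func startWithRetry(createHost func() error) error {
		if err := createHost(); err != nil {
			// "! StartHost failed, but will try again: ..."
			time.Sleep(5 * time.Second)
			if err := createHost(); err != nil {
				// "X Exiting due to GUEST_PROVISION: ..." -> exit status 80
				return errors.New("GUEST_PROVISION: " + err.Error())
			}
		}
		return nil
	}

	func main() {
		err := startWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		fmt.Println(err)
	}

Because the underlying socket never becomes reachable, the retry only doubles the wall-clock cost of each failing test.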

TestStoppedBinaryUpgrade/Upgrade (591.08s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2135525971 start -p stopped-upgrade-370000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2135525971 start -p stopped-upgrade-370000 --memory=2200 --vm-driver=qemu2 : (54.1206985s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2135525971 -p stopped-upgrade-370000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2135525971 -p stopped-upgrade-370000 stop: (12.110665833s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-370000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-370000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m44.782399542s)

-- stdout --
	* [stopped-upgrade-370000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-370000" primary control-plane node in "stopped-upgrade-370000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-370000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 04:54:47.901038   23138 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:54:47.901201   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:54:47.901205   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 04:54:47.901208   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:54:47.901366   23138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:54:47.902523   23138 out.go:298] Setting JSON to false
	I0729 04:54:47.921979   23138 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10456,"bootTime":1722243631,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:54:47.922057   23138 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:54:47.927485   23138 out.go:177] * [stopped-upgrade-370000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:54:47.934444   23138 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:54:47.934494   23138 notify.go:220] Checking for updates...
	I0729 04:54:47.942415   23138 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:54:47.945444   23138 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:54:47.948531   23138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:54:47.951527   23138 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:54:47.954481   23138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:54:47.957741   23138 config.go:182] Loaded profile config "stopped-upgrade-370000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:54:47.962433   23138 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 04:54:47.965494   23138 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:54:47.968649   23138 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:54:47.976517   23138 start.go:297] selected driver: qemu2
	I0729 04:54:47.976525   23138 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-370000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54107 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-370000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:54:47.976593   23138 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:54:47.979578   23138 cni.go:84] Creating CNI manager for ""
	I0729 04:54:47.979617   23138 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:54:47.979648   23138 start.go:340] cluster config:
	{Name:stopped-upgrade-370000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54107 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-370000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:54:47.979703   23138 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:54:47.988479   23138 out.go:177] * Starting "stopped-upgrade-370000" primary control-plane node in "stopped-upgrade-370000" cluster
	I0729 04:54:47.992493   23138 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:54:47.992516   23138 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 04:54:47.992530   23138 cache.go:56] Caching tarball of preloaded images
	I0729 04:54:47.992595   23138 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:54:47.992602   23138 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 04:54:47.992677   23138 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/config.json ...
	I0729 04:54:47.993170   23138 start.go:360] acquireMachinesLock for stopped-upgrade-370000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:54:47.993208   23138 start.go:364] duration metric: took 31.75µs to acquireMachinesLock for "stopped-upgrade-370000"
	I0729 04:54:47.993218   23138 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:54:47.993225   23138 fix.go:54] fixHost starting: 
	I0729 04:54:47.993345   23138 fix.go:112] recreateIfNeeded on stopped-upgrade-370000: state=Stopped err=<nil>
	W0729 04:54:47.993354   23138 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:54:48.001407   23138 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-370000" ...
	I0729 04:54:48.005325   23138 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:54:48.005409   23138 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/stopped-upgrade-370000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/stopped-upgrade-370000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/stopped-upgrade-370000/qemu.pid -nic user,model=virtio,hostfwd=tcp::54075-:22,hostfwd=tcp::54076-:2376,hostname=stopped-upgrade-370000 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/stopped-upgrade-370000/disk.qcow2
	I0729 04:54:48.058224   23138 main.go:141] libmachine: STDOUT: 
	I0729 04:54:48.058266   23138 main.go:141] libmachine: STDERR: 
	I0729 04:54:48.058272   23138 main.go:141] libmachine: Waiting for VM to start (ssh -p 54075 docker@127.0.0.1)...
	I0729 04:55:07.293878   23138 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/config.json ...
	I0729 04:55:07.294138   23138 machine.go:94] provisionDockerMachine start ...
	I0729 04:55:07.294193   23138 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:07.294354   23138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046a2a10] 0x1046a5270 <nil>  [] 0s} localhost 54075 <nil> <nil>}
	I0729 04:55:07.294361   23138 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 04:55:07.361066   23138 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 04:55:07.361088   23138 buildroot.go:166] provisioning hostname "stopped-upgrade-370000"
	I0729 04:55:07.361163   23138 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:07.361291   23138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046a2a10] 0x1046a5270 <nil>  [] 0s} localhost 54075 <nil> <nil>}
	I0729 04:55:07.361297   23138 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-370000 && echo "stopped-upgrade-370000" | sudo tee /etc/hostname
	I0729 04:55:07.428063   23138 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-370000
	
	I0729 04:55:07.428116   23138 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:07.428228   23138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046a2a10] 0x1046a5270 <nil>  [] 0s} localhost 54075 <nil> <nil>}
	I0729 04:55:07.428236   23138 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-370000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-370000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-370000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 04:55:07.494152   23138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 04:55:07.494166   23138 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19338-21024/.minikube CaCertPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19338-21024/.minikube}
	I0729 04:55:07.494182   23138 buildroot.go:174] setting up certificates
	I0729 04:55:07.494187   23138 provision.go:84] configureAuth start
	I0729 04:55:07.494195   23138 provision.go:143] copyHostCerts
	I0729 04:55:07.494282   23138 exec_runner.go:144] found /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.pem, removing ...
	I0729 04:55:07.494290   23138 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.pem
	I0729 04:55:07.494391   23138 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.pem (1078 bytes)
	I0729 04:55:07.494555   23138 exec_runner.go:144] found /Users/jenkins/minikube-integration/19338-21024/.minikube/cert.pem, removing ...
	I0729 04:55:07.494561   23138 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19338-21024/.minikube/cert.pem
	I0729 04:55:07.494607   23138 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19338-21024/.minikube/cert.pem (1123 bytes)
	I0729 04:55:07.494701   23138 exec_runner.go:144] found /Users/jenkins/minikube-integration/19338-21024/.minikube/key.pem, removing ...
	I0729 04:55:07.494705   23138 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19338-21024/.minikube/key.pem
	I0729 04:55:07.494742   23138 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19338-21024/.minikube/key.pem (1679 bytes)
	I0729 04:55:07.494836   23138 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-370000 san=[127.0.0.1 localhost minikube stopped-upgrade-370000]
	I0729 04:55:07.587883   23138 provision.go:177] copyRemoteCerts
	I0729 04:55:07.587947   23138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 04:55:07.587959   23138 sshutil.go:53] new ssh client: &{IP:localhost Port:54075 SSHKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/stopped-upgrade-370000/id_rsa Username:docker}
	I0729 04:55:07.622070   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 04:55:07.630102   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 04:55:07.638039   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 04:55:07.646069   23138 provision.go:87] duration metric: took 151.876625ms to configureAuth
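
configureAuth above refreshes the host-side CA material and generates a server certificate whose SANs match the addresses the Docker daemon will be reached by, then copies the PEMs into /etc/docker. A rough openssl equivalent, assuming ca.pem/ca-key.pem already exist (names and SANs here are illustrative, and the process substitution requires bash):

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.example" -out server.csr
    # sign with the CA and attach the SANs the daemon must answer to
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube")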
	I0729 04:55:07.646082   23138 buildroot.go:189] setting minikube options for container-runtime
	I0729 04:55:07.646214   23138 config.go:182] Loaded profile config "stopped-upgrade-370000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:55:07.646261   23138 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:07.646351   23138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046a2a10] 0x1046a5270 <nil>  [] 0s} localhost 54075 <nil> <nil>}
	I0729 04:55:07.646357   23138 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 04:55:07.711865   23138 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 04:55:07.711880   23138 buildroot.go:70] root file system type: tmpfs
	I0729 04:55:07.711927   23138 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 04:55:07.711978   23138 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:07.712097   23138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046a2a10] 0x1046a5270 <nil>  [] 0s} localhost 54075 <nil> <nil>}
	I0729 04:55:07.712135   23138 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 04:55:07.775623   23138 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 04:55:07.775692   23138 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:07.775808   23138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046a2a10] 0x1046a5270 <nil>  [] 0s} localhost 54075 <nil> <nil>}
	I0729 04:55:07.775817   23138 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 04:55:08.134514   23138 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 04:55:08.134528   23138 machine.go:97] duration metric: took 840.401417ms to provisionDockerMachine
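
The unit update above is a write-then-swap: the rendered unit goes to docker.service.new, and only when it differs from the installed copy is it moved into place and the daemon reloaded, enabled, and restarted. The same pattern in isolation, with the unit path as an example:

    UNIT=/lib/systemd/system/docker.service
    # install the new unit only when its content actually changed
    if ! sudo diff -u "$UNIT" "$UNIT.new"; then
      sudo mv "$UNIT.new" "$UNIT"
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    fi

Because diff also exits non-zero when $UNIT is missing, a first-time install takes the same path, which is exactly the "No such file or directory" case in the output above.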
	I0729 04:55:08.134536   23138 start.go:293] postStartSetup for "stopped-upgrade-370000" (driver="qemu2")
	I0729 04:55:08.134542   23138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 04:55:08.134605   23138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 04:55:08.134614   23138 sshutil.go:53] new ssh client: &{IP:localhost Port:54075 SSHKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/stopped-upgrade-370000/id_rsa Username:docker}
	I0729 04:55:08.168605   23138 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 04:55:08.169943   23138 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 04:55:08.169952   23138 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19338-21024/.minikube/addons for local assets ...
	I0729 04:55:08.170052   23138 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19338-21024/.minikube/files for local assets ...
	I0729 04:55:08.170150   23138 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19338-21024/.minikube/files/etc/ssl/certs/215082.pem -> 215082.pem in /etc/ssl/certs
	I0729 04:55:08.170253   23138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 04:55:08.173186   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/files/etc/ssl/certs/215082.pem --> /etc/ssl/certs/215082.pem (1708 bytes)
	I0729 04:55:08.181281   23138 start.go:296] duration metric: took 46.739208ms for postStartSetup
	I0729 04:55:08.181298   23138 fix.go:56] duration metric: took 20.188550166s for fixHost
	I0729 04:55:08.181346   23138 main.go:141] libmachine: Using SSH client type: native
	I0729 04:55:08.181469   23138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046a2a10] 0x1046a5270 <nil>  [] 0s} localhost 54075 <nil> <nil>}
	I0729 04:55:08.181474   23138 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 04:55:08.246682   23138 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722254108.375801629
	
	I0729 04:55:08.246693   23138 fix.go:216] guest clock: 1722254108.375801629
	I0729 04:55:08.246698   23138 fix.go:229] Guest: 2024-07-29 04:55:08.375801629 -0700 PDT Remote: 2024-07-29 04:55:08.1813 -0700 PDT m=+20.311693960 (delta=194.501629ms)
	I0729 04:55:08.246713   23138 fix.go:200] guest clock delta is within tolerance: 194.501629ms
	I0729 04:55:08.246716   23138 start.go:83] releasing machines lock for "stopped-upgrade-370000", held for 20.253980875s
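
fixHost compares the guest clock (date +%s.%N over SSH) against the host clock and accepts the machine when the drift is inside tolerance, 194.5ms here. A sketch of that comparison with an assumed one-second threshold (the SSH target and the threshold are illustrative, not taken from fix.go):

    guest=$(ssh user@guest 'date +%s.%N')   # hypothetical SSH target
    host=$(date +%s.%N)
    # absolute drift in seconds; awk because shell arithmetic is integer-only
    drift=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; print d }')
    awk -v d="$drift" 'BEGIN { exit !(d < 1.0) }' && echo "clock within tolerance (${drift}s)"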
	I0729 04:55:08.246784   23138 ssh_runner.go:195] Run: cat /version.json
	I0729 04:55:08.246792   23138 sshutil.go:53] new ssh client: &{IP:localhost Port:54075 SSHKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/stopped-upgrade-370000/id_rsa Username:docker}
	I0729 04:55:08.246860   23138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 04:55:08.246878   23138 sshutil.go:53] new ssh client: &{IP:localhost Port:54075 SSHKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/stopped-upgrade-370000/id_rsa Username:docker}
	W0729 04:55:08.247619   23138 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:54256->127.0.0.1:54075: read: connection reset by peer
	I0729 04:55:08.247637   23138 retry.go:31] will retry after 226.501825ms: ssh: handshake failed: read tcp 127.0.0.1:54256->127.0.0.1:54075: read: connection reset by peer
	W0729 04:55:08.507448   23138 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 04:55:08.507505   23138 ssh_runner.go:195] Run: systemctl --version
	I0729 04:55:08.509306   23138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 04:55:08.511102   23138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 04:55:08.511129   23138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 04:55:08.514157   23138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 04:55:08.519130   23138 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
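
The two find/sed invocations above rewrite whatever bridge and podman CNI configs exist so their subnet and gateway match the pod CIDR (10.244.0.0/16) and IPv6 entries are dropped. Since the conflist files are JSON, the same edit reads more clearly with jq, assuming jq were available in the guest (it is not part of this run, and the structure assumed is podman's stock 87-podman-bridge.conflist):

    f=/etc/cni/net.d/87-podman-bridge.conflist
    # point every bridge plugin's first address range at the pod CIDR
    jq '(.plugins[] | select(.type == "bridge") | .ipam.ranges[0][0].subnet) = "10.244.0.0/16"' \
      "$f" | sudo tee "$f.tmp" >/dev/null && sudo mv "$f.tmp" "$f"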
	I0729 04:55:08.519140   23138 start.go:495] detecting cgroup driver to use...
	I0729 04:55:08.519268   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:55:08.526347   23138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 04:55:08.529940   23138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 04:55:08.533775   23138 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 04:55:08.533813   23138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 04:55:08.537237   23138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:55:08.540651   23138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 04:55:08.543554   23138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:55:08.546689   23138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 04:55:08.550195   23138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 04:55:08.553895   23138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 04:55:08.557401   23138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 04:55:08.560417   23138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 04:55:08.563158   23138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 04:55:08.566457   23138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:55:08.645110   23138 ssh_runner.go:195] Run: sudo systemctl restart containerd
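
The sed sequence above edits /etc/containerd/config.toml in place: pinning the sandbox image, disabling restrict_oom_score_adj, forcing SystemdCgroup = false (the cgroupfs driver), switching runtimes to io.containerd.runc.v2, re-enabling unprivileged ports, and pointing conf_dir at /etc/cni/net.d. Reconstructed from those edits (not captured from this run), the relevant fragment of the resulting TOML looks roughly like:

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.7"
      restrict_oom_score_adj = false
      enable_unprivileged_ports = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false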
	I0729 04:55:08.652568   23138 start.go:495] detecting cgroup driver to use...
	I0729 04:55:08.652658   23138 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 04:55:08.659109   23138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:55:08.664831   23138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 04:55:08.672353   23138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:55:08.677076   23138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 04:55:08.681677   23138 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 04:55:08.726471   23138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 04:55:08.731729   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:55:08.736877   23138 ssh_runner.go:195] Run: which cri-dockerd
	I0729 04:55:08.738081   23138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 04:55:08.741442   23138 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 04:55:08.746869   23138 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 04:55:08.823469   23138 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 04:55:08.908680   23138 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 04:55:08.908750   23138 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 04:55:08.914428   23138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:55:09.003233   23138 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:55:10.153896   23138 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.150673375s)
	I0729 04:55:10.153960   23138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 04:55:10.158943   23138 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 04:55:10.166577   23138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:55:10.171894   23138 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 04:55:10.251003   23138 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 04:55:10.332554   23138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:55:10.415902   23138 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 04:55:10.421847   23138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:55:10.426881   23138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:55:10.504352   23138 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 04:55:10.543632   23138 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 04:55:10.543708   23138 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 04:55:10.546846   23138 start.go:563] Will wait 60s for crictl version
	I0729 04:55:10.546894   23138 ssh_runner.go:195] Run: which crictl
	I0729 04:55:10.548293   23138 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 04:55:10.562579   23138 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 04:55:10.562654   23138 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:55:10.578388   23138 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
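
With docker chosen as the runtime, crictl reaches it through cri-dockerd via the /etc/crictl.yaml written above, and the version probe reports RuntimeName docker / RuntimeVersion 20.10.16. The endpoint can also be supplied explicitly, assuming crictl is on PATH in the guest:

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/cri-dockerd.sock
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version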
	I0729 04:55:10.599332   23138 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 04:55:10.599463   23138 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 04:55:10.600798   23138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
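
The host.minikube.internal entry is refreshed by filtering any old line out, appending the new one into a temp file, and copying that over /etc/hosts with sudo; the temp-file hop matters because a redirection like sudo cmd > /etc/hosts would be performed by the unprivileged shell. Spelled out (10.0.2.2 is this run's gateway address):

    entry=$'10.0.2.2\thost.minikube.internal'
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$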
	I0729 04:55:10.604670   23138 kubeadm.go:883] updating cluster {Name:stopped-upgrade-370000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54107 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-370000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 04:55:10.604715   23138 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:55:10.604758   23138 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:55:10.616127   23138 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:55:10.616135   23138 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:55:10.616181   23138 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:55:10.619017   23138 ssh_runner.go:195] Run: which lz4
	I0729 04:55:10.620293   23138 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 04:55:10.621531   23138 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 04:55:10.621542   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 04:55:11.531826   23138 docker.go:649] duration metric: took 911.581917ms to copy over tarball
	I0729 04:55:11.531888   23138 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 04:55:12.729967   23138 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.198093583s)
	I0729 04:55:12.729982   23138 ssh_runner.go:146] rm: /preloaded.tar.lz4
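
Because the guest has no /preloaded.tar.lz4, the 359MB preload is scp'd over and unpacked into /var with extended attributes preserved, so file capabilities on the bundled binaries survive extraction. The unpack step in isolation, with the flags annotated:

    # -I lz4 filters the archive through lz4; --xattrs keeps security.capability
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4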
	I0729 04:55:12.745701   23138 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:55:12.749218   23138 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 04:55:12.754629   23138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:55:12.836494   23138 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:55:14.523670   23138 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.687198125s)
	I0729 04:55:14.523785   23138 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:55:14.537293   23138 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:55:14.537302   23138 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:55:14.537308   23138 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 04:55:14.541583   23138 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:55:14.543109   23138 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:55:14.545343   23138 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:55:14.545383   23138 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:55:14.548125   23138 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:55:14.548236   23138 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:55:14.550273   23138 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:55:14.550528   23138 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:55:14.552290   23138 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:55:14.552692   23138 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:55:14.554091   23138 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:55:14.554145   23138 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:55:14.555824   23138 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 04:55:14.556113   23138 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:55:14.566960   23138 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:55:14.568359   23138 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 04:55:14.961078   23138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:55:14.971652   23138 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 04:55:14.971690   23138 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:55:14.971743   23138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:55:14.982120   23138 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 04:55:14.982824   23138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:55:14.987638   23138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0729 04:55:14.996014   23138 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 04:55:14.996150   23138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:55:14.996581   23138 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 04:55:14.996604   23138 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:55:14.996626   23138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:55:15.001562   23138 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 04:55:15.001587   23138 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:55:15.001635   23138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 04:55:15.006693   23138 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 04:55:15.006715   23138 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:55:15.006765   23138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:55:15.009618   23138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:55:15.016786   23138 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 04:55:15.029106   23138 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 04:55:15.029117   23138 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 04:55:15.029225   23138 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:55:15.029225   23138 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 04:55:15.031615   23138 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 04:55:15.031639   23138 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:55:15.031682   23138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:55:15.032065   23138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 04:55:15.032991   23138 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 04:55:15.033004   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 04:55:15.033036   23138 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0729 04:55:15.033046   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0729 04:55:15.035166   23138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:55:15.066520   23138 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 04:55:15.066540   23138 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 04:55:15.066608   23138 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 04:55:15.066661   23138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 04:55:15.084970   23138 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 04:55:15.084991   23138 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:55:15.085054   23138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:55:15.116609   23138 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 04:55:15.116728   23138 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 04:55:15.137332   23138 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:55:15.137382   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 04:55:15.152122   23138 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 04:55:15.152145   23138 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 04:55:15.152180   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 04:55:15.243163   23138 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 04:55:15.243203   23138 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 04:55:15.243212   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0729 04:55:15.291597   23138 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 04:55:15.291710   23138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:55:15.329076   23138 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 04:55:15.343167   23138 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 04:55:15.343216   23138 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:55:15.343331   23138 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:55:15.385618   23138 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 04:55:15.385886   23138 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:55:15.396699   23138 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 04:55:15.396732   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 04:55:15.425701   23138 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 04:55:15.425729   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0729 04:55:15.594315   23138 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 04:55:15.594340   23138 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:55:15.594346   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 04:55:15.826146   23138 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 04:55:15.826184   23138 cache_images.go:92] duration metric: took 1.288898292s to LoadCachedImages
	W0729 04:55:15.826226   23138 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
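
Each missing image follows the same cycle above: docker image inspect to compare IDs, docker rmi when the guest copy doesn't match, scp of the cached tarball into /var/lib/minikube/images, then a load. The load step for one image, as run here (pause:3.7 as the example):

    img=/var/lib/minikube/images/pause_3.7
    # stream the tarball into the daemon; sudo cat because the file is root-owned
    sudo cat "$img" | docker load
    docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7   # confirm it landed

kube-proxy stays unresolved because its tarball is absent from the host cache, hence the warning above.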
	I0729 04:55:15.826233   23138 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 04:55:15.826299   23138 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-370000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-370000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
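
These kubelet flags land as a systemd drop-in (the 10-kubeadm.conf scp'd below), layered over the base kubelet.service; the empty ExecStart= line clears the inherited command before the full one is set, the same trick used for docker.service earlier. In the guest, the merged result could be inspected with:

    systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload && sudo systemctl restart kubelet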
	I0729 04:55:15.826369   23138 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 04:55:15.840304   23138 cni.go:84] Creating CNI manager for ""
	I0729 04:55:15.840314   23138 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:55:15.840319   23138 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 04:55:15.840327   23138 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-370000 NodeName:stopped-upgrade-370000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 04:55:15.840385   23138 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-370000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 04:55:15.840439   23138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 04:55:15.843233   23138 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 04:55:15.843262   23138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 04:55:15.846337   23138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 04:55:15.851079   23138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 04:55:15.855639   23138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
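
The rendered manifest above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file and is staged as kubeadm.yaml.new for the drift check further down. Outside this run it could be sanity-checked with kubeadm itself; note that kubeadm config validate only exists in newer releases (roughly v1.26+), not the v1.24.1 used here:

    # newer kubeadm can lint the staged file directly
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # any version can print defaults to diff against
    kubeadm config print init-defaults --component-configs KubeletConfiguration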
	I0729 04:55:15.861295   23138 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 04:55:15.862487   23138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 04:55:15.866100   23138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:55:15.948783   23138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:55:15.955829   23138 certs.go:68] Setting up /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000 for IP: 10.0.2.15
	I0729 04:55:15.955840   23138 certs.go:194] generating shared ca certs ...
	I0729 04:55:15.955849   23138 certs.go:226] acquiring lock for ca certs: {Name:mkd0b73609ecd85c52105a2a4e4113a2c11cb5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:55:15.956102   23138 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.key
	I0729 04:55:15.956155   23138 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/proxy-client-ca.key
	I0729 04:55:15.956161   23138 certs.go:256] generating profile certs ...
	I0729 04:55:15.956222   23138 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/client.key
	I0729 04:55:15.956234   23138 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/apiserver.key.0fd819db
	I0729 04:55:15.956244   23138 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/apiserver.crt.0fd819db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 04:55:16.081548   23138 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/apiserver.crt.0fd819db ...
	I0729 04:55:16.081561   23138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/apiserver.crt.0fd819db: {Name:mk436e0166c66b3f37e2eefb89cac74032988b8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:55:16.081838   23138 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/apiserver.key.0fd819db ...
	I0729 04:55:16.081843   23138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/apiserver.key.0fd819db: {Name:mk93e47f33352c50108c1cd6b076ed4e68e46ee6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:55:16.081962   23138 certs.go:381] copying /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/apiserver.crt.0fd819db -> /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/apiserver.crt
	I0729 04:55:16.082123   23138 certs.go:385] copying /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/apiserver.key.0fd819db -> /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/apiserver.key
	I0729 04:55:16.082274   23138 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/proxy-client.key
	I0729 04:55:16.082400   23138 certs.go:484] found cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/21508.pem (1338 bytes)
	W0729 04:55:16.082422   23138 certs.go:480] ignoring /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/21508_empty.pem, impossibly tiny 0 bytes
	I0729 04:55:16.082427   23138 certs.go:484] found cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 04:55:16.082446   23138 certs.go:484] found cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem (1078 bytes)
	I0729 04:55:16.082464   23138 certs.go:484] found cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem (1123 bytes)
	I0729 04:55:16.082482   23138 certs.go:484] found cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/key.pem (1679 bytes)
	I0729 04:55:16.082520   23138 certs.go:484] found cert: /Users/jenkins/minikube-integration/19338-21024/.minikube/files/etc/ssl/certs/215082.pem (1708 bytes)
	I0729 04:55:16.082847   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 04:55:16.090032   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 04:55:16.097578   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 04:55:16.105158   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 04:55:16.112426   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 04:55:16.118742   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 04:55:16.125810   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 04:55:16.133267   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 04:55:16.140451   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/files/etc/ssl/certs/215082.pem --> /usr/share/ca-certificates/215082.pem (1708 bytes)
	I0729 04:55:16.147243   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 04:55:16.154152   23138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/21508.pem --> /usr/share/ca-certificates/21508.pem (1338 bytes)
	I0729 04:55:16.161263   23138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 04:55:16.166568   23138 ssh_runner.go:195] Run: openssl version
	I0729 04:55:16.168522   23138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 04:55:16.171223   23138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:55:16.172637   23138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 11:54 /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:55:16.172656   23138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:55:16.174394   23138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 04:55:16.177694   23138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21508.pem && ln -fs /usr/share/ca-certificates/21508.pem /etc/ssl/certs/21508.pem"
	I0729 04:55:16.180722   23138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21508.pem
	I0729 04:55:16.182103   23138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:43 /usr/share/ca-certificates/21508.pem
	I0729 04:55:16.182145   23138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21508.pem
	I0729 04:55:16.183954   23138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21508.pem /etc/ssl/certs/51391683.0"
	I0729 04:55:16.186637   23138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215082.pem && ln -fs /usr/share/ca-certificates/215082.pem /etc/ssl/certs/215082.pem"
	I0729 04:55:16.190161   23138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215082.pem
	I0729 04:55:16.191862   23138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:43 /usr/share/ca-certificates/215082.pem
	I0729 04:55:16.191884   23138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215082.pem
	I0729 04:55:16.193718   23138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215082.pem /etc/ssl/certs/3ec20f2e.0"
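
The test -L / ln -fs steps reproduce what c_rehash does: OpenSSL locates CAs by subject-hash filenames, so each PEM gets a /etc/ssl/certs/<hash>.0 symlink. Computing one link name by hand (b5213941 is exactly the hash linked for minikubeCA.pem above):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"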
	I0729 04:55:16.197210   23138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 04:55:16.198830   23138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 04:55:16.201180   23138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 04:55:16.203113   23138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 04:55:16.205095   23138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 04:55:16.206766   23138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 04:55:16.208555   23138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
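
Each control-plane certificate is then probed with openssl x509 -checkend 86400, which exits non-zero if the cert expires within the next 24 hours; a failure here would trigger regeneration. The same set checked in a loop:

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        || echo "$c expires within 24h"
    done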
	I0729 04:55:16.210508   23138 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-370000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54107 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-370000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:55:16.210596   23138 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:55:16.220867   23138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 04:55:16.223946   23138 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 04:55:16.223954   23138 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 04:55:16.223979   23138 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 04:55:16.226709   23138 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:55:16.226751   23138 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-370000" does not appear in /Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:55:16.226768   23138 kubeconfig.go:62] /Users/jenkins/minikube-integration/19338-21024/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-370000" cluster setting kubeconfig missing "stopped-upgrade-370000" context setting]
	I0729 04:55:16.226935   23138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/kubeconfig: {Name:mkedcfdd12fb07fdee08d71279d618976d6521b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:55:16.227514   23138 kapi.go:59] client config for stopped-upgrade-370000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/client.key", CAFile:"/Users/jenkins/minikube-integration/19338-21024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a38080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:55:16.228351   23138 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 04:55:16.230931   23138 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-370000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
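
The diff above is why minikube reconfigures instead of reusing the on-disk kubeadm.yaml: the CRI socket must be a full unix:// URI for this kubelet/cri-dockerd pairing, and the kubelet stanza moves to the cgroupfs driver with a hairpin-veth mode and a longer runtime request timeout. A hedged way to inspect the regenerated config by hand, assuming the guest is still reachable over SSH (profile name and path are taken from the log):

    # Sketch: view the new kubeadm config inside the minikube guest
    minikube ssh -p stopped-upgrade-370000 -- \
      sudo cat /var/tmp/minikube/kubeadm.yaml.new
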
	I0729 04:55:16.230937   23138 kubeadm.go:1160] stopping kube-system containers ...
	I0729 04:55:16.230974   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:55:16.241400   23138 docker.go:483] Stopping containers: [63c6a22e7d69 359d1ecd0e9f bac68c7fee7b da961fc6ef77 878b32ed0dbf 8706770be2f3 6882ba5fbf3e 354d880b4d90]
	I0729 04:55:16.241468   23138 ssh_runner.go:195] Run: docker stop 63c6a22e7d69 359d1ecd0e9f bac68c7fee7b da961fc6ef77 878b32ed0dbf 8706770be2f3 6882ba5fbf3e 354d880b4d90
	I0729 04:55:16.252257   23138 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 04:55:16.257501   23138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:55:16.260365   23138 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:55:16.260371   23138 kubeadm.go:157] found existing configuration files:
	
	I0729 04:55:16.260397   23138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/admin.conf
	I0729 04:55:16.262681   23138 kubeadm.go:163] "https://control-plane.minikube.internal:54107" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:55:16.262707   23138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:55:16.265566   23138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/kubelet.conf
	I0729 04:55:16.268138   23138 kubeadm.go:163] "https://control-plane.minikube.internal:54107" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:55:16.268160   23138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:55:16.270678   23138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/controller-manager.conf
	I0729 04:55:16.273538   23138 kubeadm.go:163] "https://control-plane.minikube.internal:54107" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:55:16.273558   23138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:55:16.276211   23138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/scheduler.conf
	I0729 04:55:16.278607   23138 kubeadm.go:163] "https://control-plane.minikube.internal:54107" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:55:16.278626   23138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
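
The four grep-and-remove steps above apply one rule: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is deleted so kubeadm can regenerate it. Here every grep exits non-zero because the files are missing, so each rm is a no-op. A condensed sketch of the same logic, with the endpoint and file names taken from the log:

    # Sketch: drop kubeconfigs that don't point at the expected endpoint
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:54107 \
        /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
    done
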
	I0729 04:55:16.281402   23138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:55:16.284173   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:55:16.305481   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:55:16.809534   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:55:16.939850   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:55:16.972250   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
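
Rather than a full `kubeadm init`, the restart path replays individual init phases against the refreshed config. Stripped of the PATH wrapper, the sequence is (a sketch of the same phases and config path shown in the log, not the exact invocation):

    # Sketch: the kubeadm init phases replayed by the restart path
    sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml
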
	I0729 04:55:16.992635   23138 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:55:16.992726   23138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:55:17.493827   23138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:55:17.994780   23138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:55:17.999483   23138 api_server.go:72] duration metric: took 1.006873292s to wait for apiserver process to appear ...
	I0729 04:55:17.999493   23138 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:55:17.999504   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:55:23.000930   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:55:23.000974   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:55:28.001440   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:55:28.001524   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:55:33.002213   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:55:33.002238   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:55:38.002645   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:55:38.002674   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:55:43.003131   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:55:43.003154   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:55:48.003823   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:55:48.003885   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:55:53.004865   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:55:53.004921   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:55:58.006359   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:55:58.006406   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:03.008226   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:03.008278   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:08.010516   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:08.010554   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:13.012724   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:13.012765   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:18.014940   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
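
From here on the apiserver never answers /healthz: each probe times out after about 5s, and after roughly a minute of failures the loop falls back to collecting component logs before retrying. A hedged manual probe of the same endpoint; 10.0.2.15 is the QEMU user-mode guest address from the log, so it has to be queried from inside the guest:

    # Sketch: probe the apiserver health endpoint from within the guest
    minikube ssh -p stopped-upgrade-370000 -- \
      curl -sk https://10.0.2.15:8443/healthz
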
	I0729 04:56:18.015392   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:56:18.048719   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:56:18.048884   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:56:18.068792   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:56:18.068907   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:56:18.084583   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:56:18.084653   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:56:18.096887   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:56:18.096967   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:56:18.108276   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:56:18.108358   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:56:18.119072   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:56:18.119149   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:56:18.134369   23138 logs.go:276] 0 containers: []
	W0729 04:56:18.134381   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:56:18.134440   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:56:18.145135   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:56:18.145153   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:56:18.145162   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:56:18.157762   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:56:18.157773   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:56:18.175884   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:56:18.175896   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:56:18.201800   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:56:18.201810   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:56:18.216837   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:56:18.216850   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:56:18.259010   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:56:18.259022   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:56:18.277095   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:56:18.277108   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:56:18.289053   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:56:18.289064   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:56:18.304716   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:56:18.304727   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:56:18.316490   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:56:18.316500   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:56:18.328143   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:56:18.328158   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:56:18.341347   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:56:18.341359   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:56:18.355719   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:56:18.355731   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:56:18.367528   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:56:18.367541   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:56:18.383211   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:56:18.383224   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:56:18.421594   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:56:18.421603   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:56:18.529931   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:56:18.529943   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:56:21.036112   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:26.037039   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:26.037495   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:56:26.075844   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:56:26.075990   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:56:26.097731   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:56:26.097840   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:56:26.117358   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:56:26.117444   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:56:26.129144   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:56:26.129216   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:56:26.141550   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:56:26.141619   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:56:26.152072   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:56:26.152130   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:56:26.166198   23138 logs.go:276] 0 containers: []
	W0729 04:56:26.166210   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:56:26.166264   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:56:26.176788   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:56:26.176806   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:56:26.176812   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:56:26.188582   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:56:26.188596   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:56:26.206273   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:56:26.206287   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:56:26.221699   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:56:26.221712   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:56:26.233525   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:56:26.233538   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:56:26.247436   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:56:26.247449   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:56:26.259201   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:56:26.259212   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:56:26.296972   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:56:26.296981   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:56:26.300916   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:56:26.300922   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:56:26.335927   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:56:26.335939   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:56:26.347605   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:56:26.347616   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:56:26.359218   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:56:26.359229   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:56:26.373289   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:56:26.373299   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:56:26.386738   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:56:26.386749   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:56:26.398310   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:56:26.398324   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:56:26.423664   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:56:26.423670   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:56:26.465735   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:56:26.465745   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:56:28.986489   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:33.988743   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:33.988963   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:56:34.020215   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:56:34.020333   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:56:34.035790   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:56:34.035878   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:56:34.048173   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:56:34.048244   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:56:34.059266   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:56:34.059338   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:56:34.069572   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:56:34.069644   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:56:34.080358   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:56:34.080453   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:56:34.090643   23138 logs.go:276] 0 containers: []
	W0729 04:56:34.090654   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:56:34.090713   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:56:34.101247   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:56:34.101266   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:56:34.101271   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:56:34.138009   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:56:34.138023   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:56:34.152884   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:56:34.152893   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:56:34.167232   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:56:34.167244   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:56:34.178680   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:56:34.178690   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:56:34.190056   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:56:34.190068   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:56:34.201833   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:56:34.201842   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:56:34.219466   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:56:34.219475   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:56:34.233550   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:56:34.233561   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:56:34.272728   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:56:34.272735   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:56:34.297969   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:56:34.297977   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:56:34.312378   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:56:34.312388   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:56:34.324167   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:56:34.324178   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:56:34.328246   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:56:34.328254   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:56:34.342967   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:56:34.342977   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:56:34.354482   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:56:34.354493   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:56:34.366290   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:56:34.366301   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:56:36.905928   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:41.908050   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:41.908170   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:56:41.922483   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:56:41.922564   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:56:41.935458   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:56:41.935530   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:56:41.945967   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:56:41.946032   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:56:41.960334   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:56:41.960408   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:56:41.970732   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:56:41.970805   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:56:41.981207   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:56:41.981273   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:56:41.991119   23138 logs.go:276] 0 containers: []
	W0729 04:56:41.991134   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:56:41.991190   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:56:42.002521   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:56:42.002539   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:56:42.002543   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:56:42.027931   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:56:42.027941   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:56:42.041974   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:56:42.041985   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:56:42.053327   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:56:42.053341   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:56:42.070888   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:56:42.070902   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:56:42.084624   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:56:42.084636   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:56:42.098435   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:56:42.098447   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:56:42.136552   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:56:42.136574   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:56:42.140941   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:56:42.140949   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:56:42.175771   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:56:42.175785   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:56:42.190006   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:56:42.190018   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:56:42.202107   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:56:42.202119   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:56:42.214142   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:56:42.214155   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:56:42.251697   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:56:42.251711   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:56:42.266182   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:56:42.266198   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:56:42.278599   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:56:42.278612   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:56:42.294452   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:56:42.294463   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:56:44.808619   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:49.810901   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:49.811187   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:56:49.839543   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:56:49.839665   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:56:49.857829   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:56:49.857907   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:56:49.871340   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:56:49.871411   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:56:49.888799   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:56:49.888884   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:56:49.904795   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:56:49.904871   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:56:49.916028   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:56:49.916095   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:56:49.926264   23138 logs.go:276] 0 containers: []
	W0729 04:56:49.926277   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:56:49.926338   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:56:49.936699   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:56:49.936718   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:56:49.936724   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:56:49.949019   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:56:49.949033   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:56:49.970919   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:56:49.970930   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:56:49.982669   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:56:49.982682   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:56:49.994517   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:56:49.994531   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:56:50.011905   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:56:50.011916   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:56:50.026178   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:56:50.026189   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:56:50.031642   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:56:50.031650   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:56:50.048944   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:56:50.048955   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:56:50.061178   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:56:50.061190   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:56:50.080258   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:56:50.080270   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:56:50.104555   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:56:50.104568   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:56:50.148281   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:56:50.148295   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:56:50.160621   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:56:50.160635   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:56:50.200515   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:56:50.200528   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:56:50.214109   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:56:50.214122   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:56:50.251486   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:56:50.251499   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:56:52.767580   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:56:57.769923   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:56:57.770401   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:56:57.805988   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:56:57.806128   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:56:57.829935   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:56:57.830033   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:56:57.844777   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:56:57.844856   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:56:57.861338   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:56:57.861409   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:56:57.873191   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:56:57.873256   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:56:57.884250   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:56:57.884316   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:56:57.894197   23138 logs.go:276] 0 containers: []
	W0729 04:56:57.894209   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:56:57.894257   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:56:57.904673   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:56:57.904692   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:56:57.904697   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:56:57.943540   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:56:57.943550   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:56:57.977368   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:56:57.977380   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:56:57.989392   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:56:57.989403   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:56:58.007010   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:56:58.007018   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:56:58.018394   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:56:58.018404   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:56:58.030040   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:56:58.030050   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:56:58.053917   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:56:58.053925   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:56:58.066512   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:56:58.066523   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:56:58.080586   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:56:58.080597   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:56:58.119216   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:56:58.119230   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:56:58.132813   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:56:58.132829   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:56:58.147047   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:56:58.147056   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:56:58.158689   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:56:58.158698   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:56:58.173018   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:56:58.173031   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:56:58.177562   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:56:58.177572   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:56:58.189189   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:56:58.189200   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:57:00.704438   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:57:05.707173   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:57:05.707632   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:57:05.743815   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:57:05.743961   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:57:05.764222   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:57:05.764323   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:57:05.779621   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:57:05.779697   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:57:05.792171   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:57:05.792246   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:57:05.803338   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:57:05.803404   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:57:05.816873   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:57:05.816942   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:57:05.831627   23138 logs.go:276] 0 containers: []
	W0729 04:57:05.831641   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:57:05.831698   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:57:05.843035   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:57:05.843053   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:57:05.843058   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:57:05.857993   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:57:05.858005   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:57:05.870250   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:57:05.870260   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:57:05.887937   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:57:05.887948   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:57:05.913355   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:57:05.913373   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:57:05.928389   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:57:05.928403   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:57:05.932932   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:57:05.932941   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:57:05.946747   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:57:05.946758   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:57:05.964314   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:57:05.964329   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:57:05.979264   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:57:05.979275   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:57:05.992848   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:57:05.992859   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:57:06.004135   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:57:06.004146   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:57:06.018020   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:57:06.018032   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:57:06.029750   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:57:06.029761   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:57:06.069023   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:57:06.069033   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:57:06.106353   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:57:06.106368   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:57:06.144870   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:57:06.144882   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:57:08.659839   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:57:13.662472   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:57:13.662779   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:57:13.700172   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:57:13.700278   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:57:13.717074   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:57:13.717159   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:57:13.730068   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:57:13.730135   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:57:13.741856   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:57:13.741931   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:57:13.760886   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:57:13.760958   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:57:13.773095   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:57:13.773165   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:57:13.784071   23138 logs.go:276] 0 containers: []
	W0729 04:57:13.784084   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:57:13.784146   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:57:13.794975   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:57:13.794991   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:57:13.794997   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:57:13.834995   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:57:13.835010   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:57:13.849182   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:57:13.849196   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:57:13.864766   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:57:13.864780   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:57:13.889493   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:57:13.889502   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:57:13.927757   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:57:13.927765   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:57:13.943527   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:57:13.943538   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:57:13.955802   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:57:13.955813   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:57:13.971054   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:57:13.971068   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:57:13.985024   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:57:13.985036   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:57:13.996980   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:57:13.996996   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:57:14.014898   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:57:14.014908   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:57:14.026226   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:57:14.026239   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:57:14.040961   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:57:14.040976   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:57:14.052574   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:57:14.052587   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:57:14.087645   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:57:14.087655   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:57:14.103456   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:57:14.103464   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:57:16.609500   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:57:21.611973   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:57:21.612179   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:57:21.626973   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:57:21.627060   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:57:21.638774   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:57:21.638863   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:57:21.651557   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:57:21.651630   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:57:21.662570   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:57:21.662643   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:57:21.673021   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:57:21.673091   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:57:21.683449   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:57:21.683518   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:57:21.695579   23138 logs.go:276] 0 containers: []
	W0729 04:57:21.695591   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:57:21.695649   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:57:21.706253   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:57:21.706272   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:57:21.706278   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:57:21.717507   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:57:21.717519   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:57:21.728900   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:57:21.728911   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:57:21.741325   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:57:21.741342   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:57:21.780664   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:57:21.780672   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:57:21.795304   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:57:21.795315   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:57:21.806816   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:57:21.806827   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:57:21.834070   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:57:21.834082   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:57:21.845527   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:57:21.845539   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:57:21.883800   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:57:21.883812   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:57:21.897484   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:57:21.897495   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:57:21.914461   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:57:21.914471   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:57:21.929166   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:57:21.929176   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:57:21.946487   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:57:21.946497   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:57:21.960414   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:57:21.960426   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:57:21.976863   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:57:21.976873   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:57:21.981457   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:57:21.981466   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
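Each `Checking apiserver healthz` / `stopped` pair in this section is a single poll of https://10.0.2.15:8443/healthz that dies on the client timeout (the timestamps put it at roughly 5 s), after which the whole gather pass repeats. A minimal sketch of such a poll loop, assuming the ~5 s timeout and ~2.5 s spacing read off the timestamps; TLS verification is skipped here only because the sketch has no cluster CA, and none of this is minikube's actual code:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above.
	const healthz = "https://10.0.2.15:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the Client.Timeout errors in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	for attempt := 0; attempt < 10; attempt++ {
		resp, err := client.Get(healthz)
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				fmt.Println("apiserver healthy")
				return
			}
		} else {
			fmt.Println("stopped:", err) // the path every poll above takes
		}
		time.Sleep(2500 * time.Millisecond)
	}
	fmt.Println("gave up: apiserver never answered /healthz")
}
```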
	I0729 04:57:24.521311   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:57:29.527498   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:57:29.527824   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:57:29.559039   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:57:29.559152   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:57:29.577561   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:57:29.577644   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:57:29.591287   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:57:29.591354   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:57:29.612071   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:57:29.612138   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:57:29.622900   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:57:29.622975   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:57:29.633206   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:57:29.633274   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:57:29.643565   23138 logs.go:276] 0 containers: []
	W0729 04:57:29.643577   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:57:29.643632   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:57:29.654133   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:57:29.654153   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:57:29.654158   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:57:29.665395   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:57:29.665407   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:57:29.689721   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:57:29.689729   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:57:29.703849   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:57:29.703864   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:57:29.722414   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:57:29.722425   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:57:29.733297   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:57:29.733310   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:57:29.744848   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:57:29.744861   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:57:29.756581   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:57:29.756596   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:57:29.794465   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:57:29.794475   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:57:29.843354   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:57:29.843366   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:57:29.858386   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:57:29.858401   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:57:29.871994   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:57:29.872009   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:57:29.908644   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:57:29.908660   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:57:29.924213   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:57:29.924227   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:57:29.936162   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:57:29.936178   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:57:29.948064   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:57:29.948075   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:57:29.952442   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:57:29.952449   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
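Before every pass the runner re-discovers containers per component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`; an empty result only produces a warning, which is why the `No container was found matching "kindnet"` line recurs on this cluster, which does not run kindnet. A sketch of that discovery step, with the component list and function name purely illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the discovery command in the log:
// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "discovery error:", err)
			continue
		}
		// Zero matches is a warning, not an error, as with kindnet above.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```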
	I0729 04:57:32.476639   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:57:37.482523   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:57:37.482673   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:57:37.498728   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:57:37.498802   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:57:37.510755   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:57:37.510824   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:57:37.521763   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:57:37.521837   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:57:37.532622   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:57:37.532698   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:57:37.542824   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:57:37.542893   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:57:37.552877   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:57:37.552951   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:57:37.563246   23138 logs.go:276] 0 containers: []
	W0729 04:57:37.563258   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:57:37.563320   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:57:37.575626   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:57:37.575644   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:57:37.575650   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:57:37.614342   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:57:37.614360   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:57:37.618770   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:57:37.618777   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:57:37.633062   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:57:37.633071   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:57:37.670979   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:57:37.670990   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:57:37.683649   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:57:37.683662   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:57:37.698689   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:57:37.698699   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:57:37.712331   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:57:37.712340   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:57:37.725566   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:57:37.725578   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:57:37.738017   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:57:37.738029   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:57:37.749757   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:57:37.749770   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:57:37.763512   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:57:37.763522   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:57:37.775396   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:57:37.775408   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:57:37.800038   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:57:37.800045   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:57:37.837674   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:57:37.837684   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:57:37.852722   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:57:37.852732   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:57:37.870587   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:57:37.870597   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
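Alongside the per-container logs, every pass also reads the host-side sources visible above: the kubelet and docker/cri-docker journals, severity-filtered dmesg, `kubectl describe nodes` through the version-pinned binary under /var/lib/minikube/binaries/v1.24.1, and a container-status listing that tries crictl first and falls back to `docker ps -a`. A sketch that simply replays those commands (the map layout and loop are illustrative; the command strings are copied verbatim from the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings copied from the gather passes above.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	}
	for name, cmdline := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", name, out)
	}
}
```

The unordered map iteration here is also one plausible reason the real gather passes print their sources in a different order each time, though that is an inference from the log, not a confirmed detail.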
	I0729 04:57:40.386730   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:57:45.389634   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:57:45.389828   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:57:45.408524   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:57:45.408606   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:57:45.424468   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:57:45.424540   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:57:45.435468   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:57:45.435535   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:57:45.445744   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:57:45.445817   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:57:45.455839   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:57:45.455904   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:57:45.466584   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:57:45.466657   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:57:45.476485   23138 logs.go:276] 0 containers: []
	W0729 04:57:45.476496   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:57:45.476555   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:57:45.486941   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:57:45.486957   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:57:45.486962   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:57:45.509947   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:57:45.509957   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:57:45.522610   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:57:45.522623   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:57:45.526577   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:57:45.526585   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:57:45.541438   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:57:45.541449   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:57:45.556628   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:57:45.556645   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:57:45.570120   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:57:45.570133   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:57:45.607778   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:57:45.607791   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:57:45.619081   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:57:45.619092   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:57:45.636034   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:57:45.636044   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:57:45.672504   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:57:45.672511   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:57:45.706666   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:57:45.706677   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:57:45.723812   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:57:45.723821   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:57:45.735523   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:57:45.735534   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:57:45.749930   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:57:45.749942   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:57:45.765107   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:57:45.765118   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:57:45.778677   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:57:45.778688   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:57:48.299540   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:57:53.303182   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:57:53.303372   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:57:53.327488   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:57:53.327563   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:57:53.340280   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:57:53.340358   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:57:53.350639   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:57:53.350710   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:57:53.361448   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:57:53.361520   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:57:53.371865   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:57:53.371936   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:57:53.383532   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:57:53.383609   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:57:53.394030   23138 logs.go:276] 0 containers: []
	W0729 04:57:53.394041   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:57:53.394099   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:57:53.405034   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:57:53.405051   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:57:53.405056   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:57:53.416854   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:57:53.416867   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:57:53.428261   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:57:53.428274   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:57:53.464617   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:57:53.464629   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:57:53.501441   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:57:53.501453   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:57:53.525938   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:57:53.525947   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:57:53.564020   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:57:53.564031   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:57:53.575521   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:57:53.575532   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:57:53.590984   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:57:53.591000   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:57:53.612681   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:57:53.612691   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:57:53.623820   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:57:53.623832   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:57:53.635361   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:57:53.635375   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:57:53.639928   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:57:53.639938   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:57:53.654108   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:57:53.654124   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:57:53.667461   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:57:53.667471   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:57:53.685569   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:57:53.685581   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:57:53.696743   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:57:53.696756   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:57:56.212662   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:58:01.215669   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:58:01.216091   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:58:01.252291   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:58:01.252438   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:58:01.273631   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:58:01.273750   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:58:01.294358   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:58:01.294428   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:58:01.306012   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:58:01.306088   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:58:01.318052   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:58:01.318123   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:58:01.329618   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:58:01.329686   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:58:01.340044   23138 logs.go:276] 0 containers: []
	W0729 04:58:01.340058   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:58:01.340132   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:58:01.350400   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:58:01.350417   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:58:01.350422   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:58:01.375572   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:58:01.375587   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:58:01.387328   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:58:01.387343   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:58:01.401193   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:58:01.401205   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:58:01.412851   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:58:01.412868   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:58:01.417336   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:58:01.417343   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:58:01.431019   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:58:01.431033   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:58:01.443060   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:58:01.443070   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:58:01.458471   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:58:01.458485   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:58:01.470045   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:58:01.470059   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:58:01.484405   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:58:01.484416   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:58:01.499076   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:58:01.499090   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:58:01.510899   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:58:01.510914   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:58:01.528156   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:58:01.528169   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:58:01.539642   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:58:01.539653   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:58:01.578178   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:58:01.578185   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:58:01.616091   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:58:01.616105   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:58:04.156414   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:58:09.159590   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:58:09.159929   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:58:09.196426   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:58:09.196559   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:58:09.217218   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:58:09.217325   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:58:09.231388   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:58:09.231471   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:58:09.243186   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:58:09.243247   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:58:09.253884   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:58:09.253948   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:58:09.264486   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:58:09.264550   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:58:09.274789   23138 logs.go:276] 0 containers: []
	W0729 04:58:09.274803   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:58:09.274866   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:58:09.285337   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:58:09.285354   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:58:09.285359   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:58:09.300430   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:58:09.300440   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:58:09.315534   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:58:09.315544   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:58:09.333341   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:58:09.333352   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:58:09.349192   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:58:09.349203   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:58:09.363932   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:58:09.363943   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:58:09.368640   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:58:09.368652   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:58:09.411134   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:58:09.411149   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:58:09.434204   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:58:09.434211   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:58:09.472966   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:58:09.472974   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:58:09.487357   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:58:09.487370   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:58:09.498598   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:58:09.498611   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:58:09.514085   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:58:09.514096   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:58:09.525907   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:58:09.525919   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:58:09.537467   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:58:09.537478   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:58:09.550970   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:58:09.550981   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:58:09.562915   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:58:09.562926   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:58:12.100945   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:58:17.103449   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:58:17.103593   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:58:17.116688   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:58:17.116768   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:58:17.127944   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:58:17.128014   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:58:17.138204   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:58:17.138274   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:58:17.148651   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:58:17.148722   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:58:17.165699   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:58:17.165763   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:58:17.179011   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:58:17.179084   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:58:17.189238   23138 logs.go:276] 0 containers: []
	W0729 04:58:17.189251   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:58:17.189305   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:58:17.199682   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:58:17.199699   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:58:17.199705   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:58:17.213839   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:58:17.213851   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:58:17.229134   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:58:17.229146   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:58:17.253086   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:58:17.253097   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:58:17.265092   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:58:17.265103   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:58:17.269755   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:58:17.269763   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:58:17.287576   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:58:17.287587   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:58:17.327600   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:58:17.327617   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:58:17.361946   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:58:17.361958   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:58:17.375962   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:58:17.375979   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:58:17.387735   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:58:17.387747   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:58:17.403994   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:58:17.404005   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:58:17.417819   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:58:17.417832   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:58:17.440472   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:58:17.440482   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:58:17.480688   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:58:17.480698   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:58:17.500097   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:58:17.500109   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:58:17.511391   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:58:17.511402   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:58:20.024206   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:58:25.026279   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:58:25.026549   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:58:25.052755   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:58:25.052864   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:58:25.070699   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:58:25.070776   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:58:25.084813   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:58:25.084890   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:58:25.096500   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:58:25.096576   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:58:25.107238   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:58:25.107307   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:58:25.117759   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:58:25.117823   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:58:25.128551   23138 logs.go:276] 0 containers: []
	W0729 04:58:25.128561   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:58:25.128615   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:58:25.138674   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:58:25.138692   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:58:25.138697   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:58:25.178282   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:58:25.178292   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:58:25.189903   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:58:25.189916   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:58:25.201640   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:58:25.201650   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:58:25.206321   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:58:25.206331   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:58:25.220168   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:58:25.220179   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:58:25.235085   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:58:25.235096   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:58:25.247279   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:58:25.247292   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:58:25.261113   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:58:25.261123   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:58:25.278642   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:58:25.278652   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:58:25.302606   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:58:25.302614   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:58:25.315259   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:58:25.315271   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:58:25.327016   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:58:25.327030   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:58:25.363720   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:58:25.363728   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:58:25.378266   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:58:25.378277   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:58:25.417800   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:58:25.417815   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:58:25.433008   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:58:25.433020   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:58:27.951298   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:58:32.953694   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:58:32.953902   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:58:32.974158   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:58:32.974248   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:58:32.987375   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:58:32.987454   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:58:32.998340   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:58:32.998397   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:58:33.016868   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:58:33.016945   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:58:33.027841   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:58:33.027915   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:58:33.040062   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:58:33.040135   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:58:33.049821   23138 logs.go:276] 0 containers: []
	W0729 04:58:33.049832   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:58:33.049886   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:58:33.064928   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:58:33.064946   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:58:33.064951   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:58:33.069374   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:58:33.069383   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:58:33.104037   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:58:33.104049   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:58:33.141868   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:58:33.141880   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:58:33.152927   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:58:33.152941   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:58:33.164811   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:58:33.164823   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:58:33.202175   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:58:33.202186   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:58:33.216567   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:58:33.216580   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:58:33.230682   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:58:33.230695   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:58:33.242124   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:58:33.242136   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:58:33.257125   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:58:33.257137   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:58:33.268549   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:58:33.268561   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:58:33.291014   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:58:33.291023   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:58:33.302974   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:58:33.302985   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:58:33.320995   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:58:33.321010   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:58:33.334998   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:58:33.335009   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:58:33.352320   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:58:33.352329   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:58:35.867429   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:58:40.869607   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:58:40.869730   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:58:40.881548   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:58:40.881627   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:58:40.892020   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:58:40.892091   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:58:40.902822   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:58:40.902893   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:58:40.913526   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:58:40.913600   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:58:40.928183   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:58:40.928242   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:58:40.938855   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:58:40.938916   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:58:40.949142   23138 logs.go:276] 0 containers: []
	W0729 04:58:40.949154   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:58:40.949221   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:58:40.959875   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:58:40.959894   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:58:40.959899   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:58:40.974473   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:58:40.974485   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:58:40.985858   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:58:40.985871   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:58:41.009774   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:58:41.009784   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:58:41.048759   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:58:41.048768   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:58:41.062051   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:58:41.062061   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:58:41.076099   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:58:41.076108   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:58:41.087852   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:58:41.087864   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:58:41.126451   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:58:41.126461   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:58:41.138125   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:58:41.138142   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:58:41.149759   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:58:41.149770   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:58:41.154403   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:58:41.154413   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:58:41.196970   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:58:41.196982   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:58:41.210713   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:58:41.210722   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:58:41.222791   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:58:41.222801   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:58:41.234488   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:58:41.234500   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:58:41.248422   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:58:41.248433   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
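The cycle that repeats from here on is a fixed-cadence health probe: minikube GETs https://10.0.2.15:8443/healthz with a short per-request timeout, and every timeout triggers another round of log collection before the next probe. A stripped-down sketch of that wait loop, assuming a 5s per-request timeout and a caller-supplied overall deadline (`waitForHealthz` is an illustrative name, not minikube's API):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200
// or the overall deadline passes. Each probe has its own short timeout, so a
// hung endpoint surfaces as "context deadline exceeded" every few seconds.
func waitForHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is not trusted by the host; skip verification in
		// this sketch (minikube instead pins the cluster CA).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
		} else {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				return nil
			}
			fmt.Printf("healthz returned %d\n", resp.StatusCode)
		}
		time.Sleep(2 * time.Second) // back off before the next probe
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(4*time.Minute)); err != nil {
		fmt.Println(err)
	}
}

In this run the loop never succeeds, which is why the control plane is eventually reset below.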
	I0729 04:58:43.768334   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:58:48.771138   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:58:48.771532   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:58:48.803242   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:58:48.803364   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:58:48.822839   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:58:48.822935   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:58:48.840693   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:58:48.840771   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:58:48.852769   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:58:48.852842   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:58:48.865223   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:58:48.865295   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:58:48.880051   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:58:48.880130   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:58:48.890346   23138 logs.go:276] 0 containers: []
	W0729 04:58:48.890357   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:58:48.890417   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:58:48.900881   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:58:48.900898   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:58:48.900903   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:58:48.912596   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:58:48.912606   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:58:48.950978   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:58:48.950985   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:58:48.985612   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:58:48.985623   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:58:49.004312   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:58:49.004323   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:58:49.018433   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:58:49.018445   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:58:49.032577   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:58:49.032588   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:58:49.047057   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:58:49.047072   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:58:49.085601   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:58:49.085611   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:58:49.097745   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:58:49.097761   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:58:49.112130   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:58:49.112141   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:58:49.116155   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:58:49.116161   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:58:49.130476   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:58:49.130485   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:58:49.142144   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:58:49.142155   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:58:49.153519   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:58:49.153529   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:58:49.171128   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:58:49.171138   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:58:49.193974   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:58:49.193981   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:58:51.708817   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:58:56.711362   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:58:56.711563   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:58:56.729381   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:58:56.729471   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:58:56.740454   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:58:56.740531   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:58:56.751010   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:58:56.751079   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:58:56.761406   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:58:56.761472   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:58:56.771610   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:58:56.771679   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:58:56.783206   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:58:56.783275   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:58:56.794381   23138 logs.go:276] 0 containers: []
	W0729 04:58:56.794397   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:58:56.794460   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:58:56.806006   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:58:56.806023   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:58:56.806028   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:58:56.820171   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:58:56.820183   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:58:56.836159   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:58:56.836173   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:58:56.848183   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:58:56.848197   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:58:56.882144   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:58:56.882154   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:58:56.900305   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:58:56.900314   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:58:56.939274   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:58:56.939288   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:58:56.950933   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:58:56.950944   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:58:56.964524   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:58:56.964535   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:58:56.976575   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:58:56.976589   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:58:56.980843   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:58:56.980849   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:58:56.991856   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:58:56.991870   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:58:57.009721   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:58:57.009737   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:58:57.021422   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:58:57.021432   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:58:57.032409   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:58:57.032424   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:58:57.071397   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:58:57.071407   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:58:57.085803   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:58:57.085816   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:58:59.610241   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:59:04.612889   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:59:04.613150   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:59:04.638291   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:59:04.638414   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:59:04.655429   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:59:04.655519   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:59:04.674259   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:59:04.674327   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:59:04.685418   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:59:04.685480   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:59:04.695876   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:59:04.695934   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:59:04.712170   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:59:04.712239   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:59:04.722225   23138 logs.go:276] 0 containers: []
	W0729 04:59:04.722240   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:59:04.722290   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:59:04.732589   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:59:04.732605   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:59:04.732610   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:59:04.746686   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:59:04.746699   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:59:04.783748   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:59:04.783758   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:59:04.797661   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:59:04.797672   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:59:04.812346   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:59:04.812357   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:59:04.837069   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:59:04.837095   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:59:04.890891   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:59:04.890902   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:59:04.905595   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:59:04.905608   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:59:04.917597   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:59:04.917610   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:59:04.929925   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:59:04.929937   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:59:04.941029   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:59:04.941040   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:59:04.981228   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:59:04.981238   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:59:04.996212   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:59:04.996224   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:59:05.013798   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:59:05.013810   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:59:05.018041   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:59:05.018048   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:59:05.033209   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:59:05.033220   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:59:05.044648   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:59:05.044659   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:59:07.563829   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:59:12.565905   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:59:12.566072   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:59:12.576945   23138 logs.go:276] 2 containers: [ee5c01944397 878b32ed0dbf]
	I0729 04:59:12.577017   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:59:12.589158   23138 logs.go:276] 2 containers: [1a861f17ec83 da961fc6ef77]
	I0729 04:59:12.589228   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:59:12.599806   23138 logs.go:276] 1 containers: [1479cbb169fb]
	I0729 04:59:12.599871   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:59:12.610470   23138 logs.go:276] 2 containers: [ebc5d9d0b323 359d1ecd0e9f]
	I0729 04:59:12.610538   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:59:12.621107   23138 logs.go:276] 1 containers: [694dfb30ba5e]
	I0729 04:59:12.621177   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:59:12.632015   23138 logs.go:276] 2 containers: [dcc60c34ad52 63c6a22e7d69]
	I0729 04:59:12.632075   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:59:12.642162   23138 logs.go:276] 0 containers: []
	W0729 04:59:12.642178   23138 logs.go:278] No container was found matching "kindnet"
	I0729 04:59:12.642234   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:59:12.652954   23138 logs.go:276] 2 containers: [2509ea9d6a85 aebbfc027efe]
	I0729 04:59:12.652973   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 04:59:12.652979   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:59:12.690290   23138 logs.go:123] Gathering logs for storage-provisioner [2509ea9d6a85] ...
	I0729 04:59:12.690298   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2509ea9d6a85"
	I0729 04:59:12.701940   23138 logs.go:123] Gathering logs for etcd [1a861f17ec83] ...
	I0729 04:59:12.701950   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a861f17ec83"
	I0729 04:59:12.716053   23138 logs.go:123] Gathering logs for etcd [da961fc6ef77] ...
	I0729 04:59:12.716062   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da961fc6ef77"
	I0729 04:59:12.730451   23138 logs.go:123] Gathering logs for kube-controller-manager [dcc60c34ad52] ...
	I0729 04:59:12.730462   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc60c34ad52"
	I0729 04:59:12.749890   23138 logs.go:123] Gathering logs for Docker ...
	I0729 04:59:12.749901   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:59:12.771766   23138 logs.go:123] Gathering logs for kube-apiserver [878b32ed0dbf] ...
	I0729 04:59:12.771776   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878b32ed0dbf"
	I0729 04:59:12.809403   23138 logs.go:123] Gathering logs for coredns [1479cbb169fb] ...
	I0729 04:59:12.809414   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1479cbb169fb"
	I0729 04:59:12.821077   23138 logs.go:123] Gathering logs for kube-scheduler [ebc5d9d0b323] ...
	I0729 04:59:12.821091   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebc5d9d0b323"
	I0729 04:59:12.834919   23138 logs.go:123] Gathering logs for kube-controller-manager [63c6a22e7d69] ...
	I0729 04:59:12.834930   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c6a22e7d69"
	I0729 04:59:12.848749   23138 logs.go:123] Gathering logs for storage-provisioner [aebbfc027efe] ...
	I0729 04:59:12.848762   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aebbfc027efe"
	I0729 04:59:12.861342   23138 logs.go:123] Gathering logs for container status ...
	I0729 04:59:12.861353   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:59:12.873469   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 04:59:12.873486   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:59:12.878100   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:59:12.878107   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:59:12.912265   23138 logs.go:123] Gathering logs for kube-apiserver [ee5c01944397] ...
	I0729 04:59:12.912276   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee5c01944397"
	I0729 04:59:12.926864   23138 logs.go:123] Gathering logs for kube-scheduler [359d1ecd0e9f] ...
	I0729 04:59:12.926880   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359d1ecd0e9f"
	I0729 04:59:12.941810   23138 logs.go:123] Gathering logs for kube-proxy [694dfb30ba5e] ...
	I0729 04:59:12.941820   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 694dfb30ba5e"
	I0729 04:59:15.455404   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:59:20.457756   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:59:20.457879   23138 kubeadm.go:597] duration metric: took 4m4.217555416s to restartPrimaryControlPlane
	W0729 04:59:20.457974   23138 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 04:59:20.458020   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 04:59:21.495602   23138 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.037585458s)
	I0729 04:59:21.495672   23138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 04:59:21.500640   23138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:59:21.503271   23138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:59:21.506047   23138 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:59:21.506053   23138 kubeadm.go:157] found existing configuration files:
	
	I0729 04:59:21.506079   23138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/admin.conf
	I0729 04:59:21.508710   23138 kubeadm.go:163] "https://control-plane.minikube.internal:54107" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:59:21.508731   23138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:59:21.511128   23138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/kubelet.conf
	I0729 04:59:21.513883   23138 kubeadm.go:163] "https://control-plane.minikube.internal:54107" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:59:21.513905   23138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:59:21.517170   23138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/controller-manager.conf
	I0729 04:59:21.519811   23138 kubeadm.go:163] "https://control-plane.minikube.internal:54107" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:59:21.519833   23138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:59:21.522427   23138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/scheduler.conf
	I0729 04:59:21.525424   23138 kubeadm.go:163] "https://control-plane.minikube.internal:54107" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54107 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:59:21.525453   23138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
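The grep-then-rm sequence above is a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. Note that grep exits non-zero both when the endpoint string is absent and when the file itself is missing (status 2 here, since kubeadm reset already deleted them), so either case leads to removal and a clean regeneration by kubeadm init. A sketch of the same sweep, assuming root and local file access (`sweepStaleConfigs` is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// sweepStaleConfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, so kubeadm init regenerates it cleanly.
func sweepStaleConfigs(endpoint string) {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the endpoint is absent or the file is
		// missing; in both cases the file is safe to drop.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - removing\n", endpoint, f)
			os.Remove(f) // ignore errors: the file may already be gone
		}
	}
}

func main() {
	sweepStaleConfigs("https://control-plane.minikube.internal:54107")
}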
	I0729 04:59:21.528345   23138 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 04:59:21.546180   23138 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 04:59:21.546209   23138 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 04:59:21.597238   23138 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 04:59:21.597331   23138 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 04:59:21.597424   23138 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 04:59:21.645150   23138 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 04:59:21.653299   23138 out.go:204]   - Generating certificates and keys ...
	I0729 04:59:21.653332   23138 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 04:59:21.653361   23138 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 04:59:21.653396   23138 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 04:59:21.653429   23138 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 04:59:21.653486   23138 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 04:59:21.653524   23138 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 04:59:21.653556   23138 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 04:59:21.653587   23138 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 04:59:21.653635   23138 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 04:59:21.653673   23138 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 04:59:21.653696   23138 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 04:59:21.653733   23138 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 04:59:21.815840   23138 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 04:59:21.906942   23138 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 04:59:21.957040   23138 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 04:59:22.062267   23138 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 04:59:22.094099   23138 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 04:59:22.094474   23138 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 04:59:22.094509   23138 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 04:59:22.176880   23138 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 04:59:22.185069   23138 out.go:204]   - Booting up control plane ...
	I0729 04:59:22.185120   23138 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 04:59:22.185179   23138 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 04:59:22.185220   23138 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 04:59:22.185272   23138 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 04:59:22.185358   23138 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 04:59:26.683930   23138 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504201 seconds
	I0729 04:59:26.684046   23138 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 04:59:26.688961   23138 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 04:59:27.195959   23138 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 04:59:27.196066   23138 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-370000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 04:59:27.701454   23138 kubeadm.go:310] [bootstrap-token] Using token: uyfey0.g9l5okzmd9i5x16z
	I0729 04:59:27.705139   23138 out.go:204]   - Configuring RBAC rules ...
	I0729 04:59:27.705213   23138 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 04:59:27.706763   23138 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 04:59:27.711204   23138 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 04:59:27.712303   23138 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 04:59:27.713573   23138 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 04:59:27.714637   23138 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 04:59:27.718611   23138 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 04:59:27.893630   23138 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 04:59:28.108153   23138 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 04:59:28.108571   23138 kubeadm.go:310] 
	I0729 04:59:28.108603   23138 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 04:59:28.108607   23138 kubeadm.go:310] 
	I0729 04:59:28.108645   23138 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 04:59:28.108649   23138 kubeadm.go:310] 
	I0729 04:59:28.108661   23138 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 04:59:28.108698   23138 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 04:59:28.108727   23138 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 04:59:28.108730   23138 kubeadm.go:310] 
	I0729 04:59:28.108764   23138 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 04:59:28.108773   23138 kubeadm.go:310] 
	I0729 04:59:28.108804   23138 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 04:59:28.108808   23138 kubeadm.go:310] 
	I0729 04:59:28.108837   23138 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 04:59:28.108881   23138 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 04:59:28.108921   23138 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 04:59:28.108925   23138 kubeadm.go:310] 
	I0729 04:59:28.108970   23138 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 04:59:28.109013   23138 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 04:59:28.109017   23138 kubeadm.go:310] 
	I0729 04:59:28.109059   23138 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uyfey0.g9l5okzmd9i5x16z \
	I0729 04:59:28.109112   23138 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:19abb723ab6eb994cd48198e215993e10e658d429ac48770fbcd96c8643368d2 \
	I0729 04:59:28.109125   23138 kubeadm.go:310] 	--control-plane 
	I0729 04:59:28.109128   23138 kubeadm.go:310] 
	I0729 04:59:28.109181   23138 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 04:59:28.109183   23138 kubeadm.go:310] 
	I0729 04:59:28.109228   23138 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uyfey0.g9l5okzmd9i5x16z \
	I0729 04:59:28.109284   23138 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:19abb723ab6eb994cd48198e215993e10e658d429ac48770fbcd96c8643368d2 
	I0729 04:59:28.109397   23138 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
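The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key (its SubjectPublicKeyInfo); joining nodes use it to pin the CA they fetch via the bootstrap token. A small sketch that reproduces the hash from the CA certificate (the path under /var/lib/minikube/certs is assumed from the certificateDir logged earlier):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash computes the kubeadm discovery-token-ca-cert-hash for a PEM CA
// certificate: sha256 over the DER-encoded SubjectPublicKeyInfo of its key.
func caCertHash(pemPath string) (string, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h) // should match the hash in the kubeadm join command above
}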
	I0729 04:59:28.109408   23138 cni.go:84] Creating CNI manager for ""
	I0729 04:59:28.109416   23138 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:59:28.114145   23138 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 04:59:28.124188   23138 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 04:59:28.127432   23138 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
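The 496-byte payload scp'd to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration; its exact contents are not shown in the log. The sketch below writes a representative bridge + portmap chain with host-local IPAM, which is the usual shape of such a conflist — the field values are illustrative, not the actual bytes minikube generates:

package main

import "os"

// A representative bridge CNI conflist; minikube's actual file may differ in
// plugin names, flags, and subnet ranges.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Writing under /etc requires root inside the node.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}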
	I0729 04:59:28.131969   23138 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 04:59:28.132014   23138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 04:59:28.132026   23138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-370000 minikube.k8s.io/updated_at=2024_07_29T04_59_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=stopped-upgrade-370000 minikube.k8s.io/primary=true
	I0729 04:59:28.169829   23138 kubeadm.go:1113] duration metric: took 37.853625ms to wait for elevateKubeSystemPrivileges
	I0729 04:59:28.169902   23138 ops.go:34] apiserver oom_adj: -16
	I0729 04:59:28.169911   23138 kubeadm.go:394] duration metric: took 4m11.943180625s to StartCluster
	I0729 04:59:28.169921   23138 settings.go:142] acquiring lock: {Name:mkdb53fe54493beaa070cff365444ca7eaee0535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:59:28.170073   23138 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:59:28.170459   23138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/kubeconfig: {Name:mkedcfdd12fb07fdee08d71279d618976d6521b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:59:28.170652   23138 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:59:28.170673   23138 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 04:59:28.170711   23138 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-370000"
	I0729 04:59:28.170738   23138 config.go:182] Loaded profile config "stopped-upgrade-370000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:59:28.170741   23138 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-370000"
	I0729 04:59:28.170751   23138 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-370000"
	I0729 04:59:28.170793   23138 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-370000"
	W0729 04:59:28.170893   23138 addons.go:243] addon storage-provisioner should already be in state true
	I0729 04:59:28.170906   23138 host.go:66] Checking if "stopped-upgrade-370000" exists ...
	I0729 04:59:28.172018   23138 kapi.go:59] client config for stopped-upgrade-370000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/stopped-upgrade-370000/client.key", CAFile:"/Users/jenkins/minikube-integration/19338-21024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a38080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0729 04:59:28.172145   23138 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-370000"
	W0729 04:59:28.172154   23138 addons.go:243] addon default-storageclass should already be in state true
	I0729 04:59:28.172162   23138 host.go:66] Checking if "stopped-upgrade-370000" exists ...
	I0729 04:59:28.175145   23138 out.go:177] * Verifying Kubernetes components...
	I0729 04:59:28.175533   23138 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 04:59:28.179290   23138 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 04:59:28.179296   23138 sshutil.go:53] new ssh client: &{IP:localhost Port:54075 SSHKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/stopped-upgrade-370000/id_rsa Username:docker}
	I0729 04:59:28.183147   23138 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:59:28.187136   23138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:59:28.190190   23138 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:59:28.190196   23138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 04:59:28.190202   23138 sshutil.go:53] new ssh client: &{IP:localhost Port:54075 SSHKeyPath:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/stopped-upgrade-370000/id_rsa Username:docker}
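Addon enablement above is a two-step push: the manifest bytes are scp'd into /etc/kubernetes/addons inside the node, then applied with the cluster's pinned kubectl against /var/lib/minikube/kubeconfig (the actual apply commands appear a few lines below). A compressed sketch of that sequence using the ssh/scp CLIs, assuming key-based auth to docker@localhost:54075 as shown in the sshutil lines (helper name and local path are hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon copies a manifest into the node and applies it with the
// cluster's pinned kubectl binary, mirroring the two ssh_runner steps above.
// Assumes the minikube SSH key is already loaded (auth setup elided).
func applyAddon(local, remote string) error {
	if out, err := exec.Command("scp", "-P", "54075", local,
		"docker@localhost:"+remote).CombinedOutput(); err != nil {
		return fmt.Errorf("scp: %v: %s", err, out)
	}
	cmd := exec.Command("ssh", "-p", "54075", "docker@localhost",
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
			"/var/lib/minikube/binaries/v1.24.1/kubectl apply -f "+remote)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := applyAddon("storage-provisioner.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Println(err)
	}
}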
	I0729 04:59:28.281861   23138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:59:28.287293   23138 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:59:28.287338   23138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:59:28.291437   23138 api_server.go:72] duration metric: took 120.77675ms to wait for apiserver process to appear ...
	I0729 04:59:28.291445   23138 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:59:28.291452   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:59:28.326596   23138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 04:59:28.346156   23138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:59:33.292196   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:59:33.292225   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:59:38.293363   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:59:38.293403   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:59:43.293870   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:59:43.293891   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:59:48.294168   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:59:48.294190   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:59:53.294558   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:59:53.294608   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:59:58.295359   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:59:58.295423   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 04:59:58.694489   23138 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 04:59:58.700069   23138 out.go:177] * Enabled addons: storage-provisioner
	I0729 04:59:58.707946   23138 addons.go:510] duration metric: took 30.537830583s for enable addons: enabled=[storage-provisioner]
	I0729 05:00:03.296235   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:03.296255   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:08.297210   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:08.297229   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:13.298703   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:13.298753   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:18.300136   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:18.300158   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:23.302207   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:23.302250   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:28.304442   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:28.304534   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:00:28.315823   23138 logs.go:276] 1 containers: [2334b557e5a6]
	I0729 05:00:28.315906   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:00:28.326508   23138 logs.go:276] 1 containers: [b4198e847f69]
	I0729 05:00:28.326579   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:00:28.337395   23138 logs.go:276] 2 containers: [6c10a0abec53 41dfa01e2ecc]
	I0729 05:00:28.337468   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:00:28.348667   23138 logs.go:276] 1 containers: [2da429e82c93]
	I0729 05:00:28.348741   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:00:28.361493   23138 logs.go:276] 1 containers: [0200e3b5446e]
	I0729 05:00:28.361574   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:00:28.372518   23138 logs.go:276] 1 containers: [91b11ed29fb4]
	I0729 05:00:28.372590   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:00:28.383373   23138 logs.go:276] 0 containers: []
	W0729 05:00:28.383387   23138 logs.go:278] No container was found matching "kindnet"
	I0729 05:00:28.383445   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:00:28.394504   23138 logs.go:276] 1 containers: [b62fcc7f4e8c]
	I0729 05:00:28.394521   23138 logs.go:123] Gathering logs for etcd [b4198e847f69] ...
	I0729 05:00:28.394528   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4198e847f69"
	I0729 05:00:28.409735   23138 logs.go:123] Gathering logs for coredns [41dfa01e2ecc] ...
	I0729 05:00:28.409746   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41dfa01e2ecc"
	I0729 05:00:28.422206   23138 logs.go:123] Gathering logs for storage-provisioner [b62fcc7f4e8c] ...
	I0729 05:00:28.422219   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fcc7f4e8c"
	I0729 05:00:28.434767   23138 logs.go:123] Gathering logs for kube-controller-manager [91b11ed29fb4] ...
	I0729 05:00:28.434779   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91b11ed29fb4"
	I0729 05:00:28.453469   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 05:00:28.453479   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 05:00:28.486871   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:00:28.486964   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:00:28.488171   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 05:00:28.488175   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:00:28.492243   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:00:28.492251   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:00:28.529158   23138 logs.go:123] Gathering logs for kube-apiserver [2334b557e5a6] ...
	I0729 05:00:28.529169   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2334b557e5a6"
	I0729 05:00:28.544240   23138 logs.go:123] Gathering logs for coredns [6c10a0abec53] ...
	I0729 05:00:28.544250   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c10a0abec53"
	I0729 05:00:28.555724   23138 logs.go:123] Gathering logs for kube-scheduler [2da429e82c93] ...
	I0729 05:00:28.555735   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2da429e82c93"
	I0729 05:00:28.571530   23138 logs.go:123] Gathering logs for kube-proxy [0200e3b5446e] ...
	I0729 05:00:28.571541   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0200e3b5446e"
	I0729 05:00:28.583426   23138 logs.go:123] Gathering logs for Docker ...
	I0729 05:00:28.583437   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:00:28.607779   23138 logs.go:123] Gathering logs for container status ...
	I0729 05:00:28.607787   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:00:28.619952   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:00:28.619964   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 05:00:28.619991   23138 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0729 05:00:28.619996   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	  Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:00:28.620000   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	  Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:00:28.620006   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:00:28.620009   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:00:38.623611   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:43.624260   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
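The cadence visible in the timestamps is a fixed retry loop: wait about ten seconds, probe https://10.0.2.15:8443/healthz with a five-second client timeout, and on failure collect a fresh round of logs. A minimal sketch of that polling pattern in Go, with the URL and timings taken from the log but the helper itself hypothetical:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz probes url every interval until it answers 200 OK or the
    // deadline passes, matching the ~10s retry / 5s timeout cadence above.
    func pollHealthz(url string, interval, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The guest apiserver serves a self-signed cert, so a health
            // probe like this one would skip verification.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        err := pollHealthz("https://10.0.2.15:8443/healthz", 10*time.Second, 4*time.Minute)
        if err != nil {
            fmt.Println(err)
        }
    }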
	I0729 05:00:43.624477   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:00:43.642296   23138 logs.go:276] 1 containers: [2334b557e5a6]
	I0729 05:00:43.642395   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:00:43.655461   23138 logs.go:276] 1 containers: [b4198e847f69]
	I0729 05:00:43.655525   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:00:43.667117   23138 logs.go:276] 2 containers: [6c10a0abec53 41dfa01e2ecc]
	I0729 05:00:43.667191   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:00:43.677702   23138 logs.go:276] 1 containers: [2da429e82c93]
	I0729 05:00:43.677762   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:00:43.688410   23138 logs.go:276] 1 containers: [0200e3b5446e]
	I0729 05:00:43.688481   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:00:43.699872   23138 logs.go:276] 1 containers: [91b11ed29fb4]
	I0729 05:00:43.699935   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:00:43.709185   23138 logs.go:276] 0 containers: []
	W0729 05:00:43.709199   23138 logs.go:278] No container was found matching "kindnet"
	I0729 05:00:43.709246   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:00:43.727305   23138 logs.go:276] 1 containers: [b62fcc7f4e8c]
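Each collection pass starts by resolving component names to container IDs with docker ps name filters over the k8s_<component> naming convention; a "0 containers" result (as for kindnet here) just means the component is not deployed. A rough Go equivalent of one lookup, assuming a local docker CLI (listContainers is illustrative, not minikube's implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns IDs of all containers, running or not, whose
    // name matches k8s_<component>, mirroring:
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }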
	I0729 05:00:43.727321   23138 logs.go:123] Gathering logs for kube-proxy [0200e3b5446e] ...
	I0729 05:00:43.727328   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0200e3b5446e"
	I0729 05:00:43.738631   23138 logs.go:123] Gathering logs for container status ...
	I0729 05:00:43.738644   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:00:43.749835   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:00:43.749845   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:00:43.789633   23138 logs.go:123] Gathering logs for kube-apiserver [2334b557e5a6] ...
	I0729 05:00:43.789647   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2334b557e5a6"
	I0729 05:00:43.804211   23138 logs.go:123] Gathering logs for coredns [6c10a0abec53] ...
	I0729 05:00:43.804223   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c10a0abec53"
	I0729 05:00:43.815637   23138 logs.go:123] Gathering logs for coredns [41dfa01e2ecc] ...
	I0729 05:00:43.815649   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41dfa01e2ecc"
	I0729 05:00:43.827537   23138 logs.go:123] Gathering logs for kube-controller-manager [91b11ed29fb4] ...
	I0729 05:00:43.827551   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91b11ed29fb4"
	I0729 05:00:43.850802   23138 logs.go:123] Gathering logs for storage-provisioner [b62fcc7f4e8c] ...
	I0729 05:00:43.850815   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fcc7f4e8c"
	I0729 05:00:43.862614   23138 logs.go:123] Gathering logs for Docker ...
	I0729 05:00:43.862630   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:00:43.886556   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 05:00:43.886565   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 05:00:43.918334   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:00:43.918429   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
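The two "Found kubelet problem" hits are the same underlying failure: the node authorizer rejects the kubelet's list/watch of the kube-proxy ConfigMap because its graph shows no relationship between node stopped-upgrade-370000 and that object, which is what appears when the restarted control plane does not yet associate any pod on the node with the ConfigMap. logs.go flags such lines by pattern-matching the journalctl output; a simplified scanner in Go (the pattern list is illustrative, not minikube's actual table):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // findKubeletProblems returns journal lines that look like known
    // kubelet failures. Patterns are illustrative only.
    func findKubeletProblems(journal string) []string {
        patterns := []string{"failed to list", "Failed to watch", "is forbidden"}
        var problems []string
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            line := sc.Text()
            for _, p := range patterns {
                if strings.Contains(line, p) {
                    problems = append(problems, line)
                    break
                }
            }
        }
        return problems
    }

    func main() {
        journal := `Jul 29 11:59:41 node kubelet[10541]: W0729 ... configmaps "kube-proxy" is forbidden`
        for _, p := range findKubeletProblems(journal) {
            fmt.Println("Found kubelet problem:", p)
        }
    }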
	I0729 05:00:43.919628   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 05:00:43.919633   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:00:43.923699   23138 logs.go:123] Gathering logs for etcd [b4198e847f69] ...
	I0729 05:00:43.923705   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4198e847f69"
	I0729 05:00:43.939425   23138 logs.go:123] Gathering logs for kube-scheduler [2da429e82c93] ...
	I0729 05:00:43.939446   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2da429e82c93"
	I0729 05:00:43.955761   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:00:43.955770   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 05:00:43.955798   23138 out.go:239] X Problems detected in kubelet:
	W0729 05:00:43.955802   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:00:43.955806   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:00:43.955823   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:00:43.955828   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:00:53.959799   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:00:58.962242   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:00:58.962546   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:00:58.995681   23138 logs.go:276] 1 containers: [2334b557e5a6]
	I0729 05:00:58.995811   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:00:59.015285   23138 logs.go:276] 1 containers: [b4198e847f69]
	I0729 05:00:59.015415   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:00:59.029377   23138 logs.go:276] 2 containers: [6c10a0abec53 41dfa01e2ecc]
	I0729 05:00:59.029442   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:00:59.041831   23138 logs.go:276] 1 containers: [2da429e82c93]
	I0729 05:00:59.041906   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:00:59.052897   23138 logs.go:276] 1 containers: [0200e3b5446e]
	I0729 05:00:59.052978   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:00:59.066027   23138 logs.go:276] 1 containers: [91b11ed29fb4]
	I0729 05:00:59.066096   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:00:59.076101   23138 logs.go:276] 0 containers: []
	W0729 05:00:59.076113   23138 logs.go:278] No container was found matching "kindnet"
	I0729 05:00:59.076177   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:00:59.086453   23138 logs.go:276] 1 containers: [b62fcc7f4e8c]
	I0729 05:00:59.086467   23138 logs.go:123] Gathering logs for coredns [6c10a0abec53] ...
	I0729 05:00:59.086475   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c10a0abec53"
	I0729 05:00:59.098288   23138 logs.go:123] Gathering logs for coredns [41dfa01e2ecc] ...
	I0729 05:00:59.098301   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41dfa01e2ecc"
	I0729 05:00:59.109977   23138 logs.go:123] Gathering logs for kube-proxy [0200e3b5446e] ...
	I0729 05:00:59.109990   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0200e3b5446e"
	I0729 05:00:59.121839   23138 logs.go:123] Gathering logs for container status ...
	I0729 05:00:59.121849   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:00:59.133122   23138 logs.go:123] Gathering logs for kube-scheduler [2da429e82c93] ...
	I0729 05:00:59.133136   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2da429e82c93"
	I0729 05:00:59.148619   23138 logs.go:123] Gathering logs for kube-controller-manager [91b11ed29fb4] ...
	I0729 05:00:59.148629   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91b11ed29fb4"
	I0729 05:00:59.176226   23138 logs.go:123] Gathering logs for storage-provisioner [b62fcc7f4e8c] ...
	I0729 05:00:59.176237   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fcc7f4e8c"
	I0729 05:00:59.191667   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 05:00:59.191678   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 05:00:59.223137   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:00:59.223231   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:00:59.224450   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 05:00:59.224455   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:00:59.228381   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:00:59.228386   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:00:59.263233   23138 logs.go:123] Gathering logs for kube-apiserver [2334b557e5a6] ...
	I0729 05:00:59.263244   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2334b557e5a6"
	I0729 05:00:59.279380   23138 logs.go:123] Gathering logs for etcd [b4198e847f69] ...
	I0729 05:00:59.279395   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4198e847f69"
	I0729 05:00:59.293824   23138 logs.go:123] Gathering logs for Docker ...
	I0729 05:00:59.293835   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:00:59.318971   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:00:59.318980   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 05:00:59.319006   23138 out.go:239] X Problems detected in kubelet:
	W0729 05:00:59.319011   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:00:59.319015   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:00:59.319036   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:00:59.319041   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:01:09.321473   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:01:14.323811   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:01:14.324165   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:01:14.354757   23138 logs.go:276] 1 containers: [2334b557e5a6]
	I0729 05:01:14.354892   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:01:14.373595   23138 logs.go:276] 1 containers: [b4198e847f69]
	I0729 05:01:14.373691   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:01:14.387933   23138 logs.go:276] 2 containers: [6c10a0abec53 41dfa01e2ecc]
	I0729 05:01:14.388010   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:01:14.403950   23138 logs.go:276] 1 containers: [2da429e82c93]
	I0729 05:01:14.404031   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:01:14.414214   23138 logs.go:276] 1 containers: [0200e3b5446e]
	I0729 05:01:14.414288   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:01:14.424847   23138 logs.go:276] 1 containers: [91b11ed29fb4]
	I0729 05:01:14.424909   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:01:14.435329   23138 logs.go:276] 0 containers: []
	W0729 05:01:14.435341   23138 logs.go:278] No container was found matching "kindnet"
	I0729 05:01:14.435391   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:01:14.449504   23138 logs.go:276] 1 containers: [b62fcc7f4e8c]
	I0729 05:01:14.449519   23138 logs.go:123] Gathering logs for coredns [6c10a0abec53] ...
	I0729 05:01:14.449524   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c10a0abec53"
	I0729 05:01:14.461280   23138 logs.go:123] Gathering logs for kube-scheduler [2da429e82c93] ...
	I0729 05:01:14.461290   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2da429e82c93"
	I0729 05:01:14.475943   23138 logs.go:123] Gathering logs for Docker ...
	I0729 05:01:14.475954   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:01:14.501871   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 05:01:14.501882   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:01:14.506247   23138 logs.go:123] Gathering logs for etcd [b4198e847f69] ...
	I0729 05:01:14.506253   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4198e847f69"
	I0729 05:01:14.520362   23138 logs.go:123] Gathering logs for kube-apiserver [2334b557e5a6] ...
	I0729 05:01:14.520373   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2334b557e5a6"
	I0729 05:01:14.534729   23138 logs.go:123] Gathering logs for coredns [41dfa01e2ecc] ...
	I0729 05:01:14.534741   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41dfa01e2ecc"
	I0729 05:01:14.546274   23138 logs.go:123] Gathering logs for kube-proxy [0200e3b5446e] ...
	I0729 05:01:14.546285   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0200e3b5446e"
	I0729 05:01:14.557893   23138 logs.go:123] Gathering logs for kube-controller-manager [91b11ed29fb4] ...
	I0729 05:01:14.557907   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91b11ed29fb4"
	I0729 05:01:14.576310   23138 logs.go:123] Gathering logs for storage-provisioner [b62fcc7f4e8c] ...
	I0729 05:01:14.576322   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fcc7f4e8c"
	I0729 05:01:14.587882   23138 logs.go:123] Gathering logs for container status ...
	I0729 05:01:14.587894   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:01:14.599611   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 05:01:14.599622   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 05:01:14.632368   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:01:14.632467   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:01:14.633688   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:01:14.633698   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:01:14.668942   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:01:14.668952   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 05:01:14.668980   23138 out.go:239] X Problems detected in kubelet:
	W0729 05:01:14.668984   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:01:14.668988   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:01:14.668991   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:01:14.668995   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:01:24.673002   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:01:29.675222   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:01:29.675413   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:01:29.695409   23138 logs.go:276] 1 containers: [2334b557e5a6]
	I0729 05:01:29.695492   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:01:29.708103   23138 logs.go:276] 1 containers: [b4198e847f69]
	I0729 05:01:29.708177   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:01:29.719762   23138 logs.go:276] 2 containers: [6c10a0abec53 41dfa01e2ecc]
	I0729 05:01:29.719836   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:01:29.731265   23138 logs.go:276] 1 containers: [2da429e82c93]
	I0729 05:01:29.731331   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:01:29.742480   23138 logs.go:276] 1 containers: [0200e3b5446e]
	I0729 05:01:29.742554   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:01:29.756583   23138 logs.go:276] 1 containers: [91b11ed29fb4]
	I0729 05:01:29.756651   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:01:29.766859   23138 logs.go:276] 0 containers: []
	W0729 05:01:29.766869   23138 logs.go:278] No container was found matching "kindnet"
	I0729 05:01:29.766927   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:01:29.777512   23138 logs.go:276] 1 containers: [b62fcc7f4e8c]
	I0729 05:01:29.777527   23138 logs.go:123] Gathering logs for container status ...
	I0729 05:01:29.777533   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:01:29.788930   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 05:01:29.788941   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:01:29.793543   23138 logs.go:123] Gathering logs for etcd [b4198e847f69] ...
	I0729 05:01:29.793550   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4198e847f69"
	I0729 05:01:29.807873   23138 logs.go:123] Gathering logs for coredns [41dfa01e2ecc] ...
	I0729 05:01:29.807886   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41dfa01e2ecc"
	I0729 05:01:29.819907   23138 logs.go:123] Gathering logs for kube-controller-manager [91b11ed29fb4] ...
	I0729 05:01:29.819918   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91b11ed29fb4"
	I0729 05:01:29.837454   23138 logs.go:123] Gathering logs for Docker ...
	I0729 05:01:29.837468   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:01:29.861265   23138 logs.go:123] Gathering logs for kube-proxy [0200e3b5446e] ...
	I0729 05:01:29.861273   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0200e3b5446e"
	I0729 05:01:29.873988   23138 logs.go:123] Gathering logs for storage-provisioner [b62fcc7f4e8c] ...
	I0729 05:01:29.874000   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fcc7f4e8c"
	I0729 05:01:29.885646   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 05:01:29.885659   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 05:01:29.917822   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:01:29.917917   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:01:29.919145   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:01:29.919150   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:01:29.954117   23138 logs.go:123] Gathering logs for kube-apiserver [2334b557e5a6] ...
	I0729 05:01:29.954134   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2334b557e5a6"
	I0729 05:01:29.969312   23138 logs.go:123] Gathering logs for coredns [6c10a0abec53] ...
	I0729 05:01:29.969324   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c10a0abec53"
	I0729 05:01:29.984504   23138 logs.go:123] Gathering logs for kube-scheduler [2da429e82c93] ...
	I0729 05:01:29.984516   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2da429e82c93"
	I0729 05:01:30.002594   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:01:30.002607   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 05:01:30.002632   23138 out.go:239] X Problems detected in kubelet:
	W0729 05:01:30.002636   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:01:30.002640   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:01:30.002643   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:01:30.002646   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:01:40.005096   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:01:45.007319   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:01:45.007599   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:01:45.034806   23138 logs.go:276] 1 containers: [2334b557e5a6]
	I0729 05:01:45.034933   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:01:45.052342   23138 logs.go:276] 1 containers: [b4198e847f69]
	I0729 05:01:45.052439   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:01:45.066056   23138 logs.go:276] 4 containers: [1bb990caee39 fad55b622fed 6c10a0abec53 41dfa01e2ecc]
	I0729 05:01:45.066130   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:01:45.077851   23138 logs.go:276] 1 containers: [2da429e82c93]
	I0729 05:01:45.077921   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:01:45.088478   23138 logs.go:276] 1 containers: [0200e3b5446e]
	I0729 05:01:45.088537   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:01:45.098893   23138 logs.go:276] 1 containers: [91b11ed29fb4]
	I0729 05:01:45.098962   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:01:45.108605   23138 logs.go:276] 0 containers: []
	W0729 05:01:45.108617   23138 logs.go:278] No container was found matching "kindnet"
	I0729 05:01:45.108672   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:01:45.119155   23138 logs.go:276] 1 containers: [b62fcc7f4e8c]
	I0729 05:01:45.119174   23138 logs.go:123] Gathering logs for coredns [1bb990caee39] ...
	I0729 05:01:45.119180   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bb990caee39"
	I0729 05:01:45.130461   23138 logs.go:123] Gathering logs for kube-scheduler [2da429e82c93] ...
	I0729 05:01:45.130472   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2da429e82c93"
	I0729 05:01:45.147788   23138 logs.go:123] Gathering logs for kube-proxy [0200e3b5446e] ...
	I0729 05:01:45.147802   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0200e3b5446e"
	I0729 05:01:45.159647   23138 logs.go:123] Gathering logs for storage-provisioner [b62fcc7f4e8c] ...
	I0729 05:01:45.159659   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fcc7f4e8c"
	I0729 05:01:45.171069   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 05:01:45.171078   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 05:01:45.204296   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:01:45.204394   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:01:45.205669   23138 logs.go:123] Gathering logs for kube-apiserver [2334b557e5a6] ...
	I0729 05:01:45.205676   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2334b557e5a6"
	I0729 05:01:45.220058   23138 logs.go:123] Gathering logs for coredns [fad55b622fed] ...
	I0729 05:01:45.220069   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fad55b622fed"
	I0729 05:01:45.231319   23138 logs.go:123] Gathering logs for container status ...
	I0729 05:01:45.231330   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:01:45.243157   23138 logs.go:123] Gathering logs for coredns [6c10a0abec53] ...
	I0729 05:01:45.243169   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c10a0abec53"
	I0729 05:01:45.254903   23138 logs.go:123] Gathering logs for Docker ...
	I0729 05:01:45.254913   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:01:45.278290   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 05:01:45.278298   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:01:45.282389   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:01:45.282396   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:01:45.318298   23138 logs.go:123] Gathering logs for etcd [b4198e847f69] ...
	I0729 05:01:45.318314   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4198e847f69"
	I0729 05:01:45.332221   23138 logs.go:123] Gathering logs for coredns [41dfa01e2ecc] ...
	I0729 05:01:45.332232   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41dfa01e2ecc"
	I0729 05:01:45.343967   23138 logs.go:123] Gathering logs for kube-controller-manager [91b11ed29fb4] ...
	I0729 05:01:45.343978   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91b11ed29fb4"
	I0729 05:01:45.361265   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:01:45.361276   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 05:01:45.361320   23138 out.go:239] X Problems detected in kubelet:
	W0729 05:01:45.361324   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:01:45.361332   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:01:45.361335   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:01:45.361338   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:01:55.363354   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:02:00.365486   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:02:00.365721   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:02:00.388612   23138 logs.go:276] 1 containers: [2334b557e5a6]
	I0729 05:02:00.388756   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:02:00.404551   23138 logs.go:276] 1 containers: [b4198e847f69]
	I0729 05:02:00.404625   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:02:00.420700   23138 logs.go:276] 4 containers: [1bb990caee39 fad55b622fed 6c10a0abec53 41dfa01e2ecc]
	I0729 05:02:00.420770   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:02:00.431049   23138 logs.go:276] 1 containers: [2da429e82c93]
	I0729 05:02:00.431107   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:02:00.441168   23138 logs.go:276] 1 containers: [0200e3b5446e]
	I0729 05:02:00.441236   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:02:00.451691   23138 logs.go:276] 1 containers: [91b11ed29fb4]
	I0729 05:02:00.451760   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:02:00.461926   23138 logs.go:276] 0 containers: []
	W0729 05:02:00.461938   23138 logs.go:278] No container was found matching "kindnet"
	I0729 05:02:00.461987   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:02:00.472256   23138 logs.go:276] 1 containers: [b62fcc7f4e8c]
	I0729 05:02:00.472272   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 05:02:00.472277   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:02:00.477039   23138 logs.go:123] Gathering logs for kube-controller-manager [91b11ed29fb4] ...
	I0729 05:02:00.477045   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91b11ed29fb4"
	I0729 05:02:00.494803   23138 logs.go:123] Gathering logs for storage-provisioner [b62fcc7f4e8c] ...
	I0729 05:02:00.494816   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fcc7f4e8c"
	I0729 05:02:00.508271   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:02:00.508281   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:02:00.542734   23138 logs.go:123] Gathering logs for etcd [b4198e847f69] ...
	I0729 05:02:00.542745   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4198e847f69"
	I0729 05:02:00.556807   23138 logs.go:123] Gathering logs for coredns [41dfa01e2ecc] ...
	I0729 05:02:00.556819   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41dfa01e2ecc"
	I0729 05:02:00.569437   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 05:02:00.569448   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 05:02:00.602535   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:02:00.602628   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:02:00.603818   23138 logs.go:123] Gathering logs for kube-apiserver [2334b557e5a6] ...
	I0729 05:02:00.603822   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2334b557e5a6"
	I0729 05:02:00.621342   23138 logs.go:123] Gathering logs for coredns [1bb990caee39] ...
	I0729 05:02:00.621353   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bb990caee39"
	I0729 05:02:00.632823   23138 logs.go:123] Gathering logs for coredns [6c10a0abec53] ...
	I0729 05:02:00.632834   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c10a0abec53"
	I0729 05:02:00.644555   23138 logs.go:123] Gathering logs for kube-scheduler [2da429e82c93] ...
	I0729 05:02:00.644569   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2da429e82c93"
	I0729 05:02:00.658637   23138 logs.go:123] Gathering logs for container status ...
	I0729 05:02:00.658648   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:02:00.670629   23138 logs.go:123] Gathering logs for coredns [fad55b622fed] ...
	I0729 05:02:00.670640   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fad55b622fed"
	I0729 05:02:00.682363   23138 logs.go:123] Gathering logs for kube-proxy [0200e3b5446e] ...
	I0729 05:02:00.682377   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0200e3b5446e"
	I0729 05:02:00.694766   23138 logs.go:123] Gathering logs for Docker ...
	I0729 05:02:00.694775   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:02:00.718314   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:02:00.718323   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 05:02:00.718350   23138 out.go:239] X Problems detected in kubelet:
	W0729 05:02:00.718355   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:02:00.718363   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:02:00.718366   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:02:00.718369   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:02:10.722345   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:02:15.724837   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:02:15.725332   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:02:15.771661   23138 logs.go:276] 1 containers: [2334b557e5a6]
	I0729 05:02:15.771800   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:02:15.792394   23138 logs.go:276] 1 containers: [b4198e847f69]
	I0729 05:02:15.792494   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:02:15.807046   23138 logs.go:276] 4 containers: [1bb990caee39 fad55b622fed 6c10a0abec53 41dfa01e2ecc]
	I0729 05:02:15.807125   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:02:15.818796   23138 logs.go:276] 1 containers: [2da429e82c93]
	I0729 05:02:15.818864   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:02:15.829349   23138 logs.go:276] 1 containers: [0200e3b5446e]
	I0729 05:02:15.829422   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:02:15.840592   23138 logs.go:276] 1 containers: [91b11ed29fb4]
	I0729 05:02:15.840663   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:02:15.851155   23138 logs.go:276] 0 containers: []
	W0729 05:02:15.851168   23138 logs.go:278] No container was found matching "kindnet"
	I0729 05:02:15.851224   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:02:15.862639   23138 logs.go:276] 1 containers: [b62fcc7f4e8c]
	I0729 05:02:15.862658   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 05:02:15.862665   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:02:15.866729   23138 logs.go:123] Gathering logs for coredns [41dfa01e2ecc] ...
	I0729 05:02:15.866735   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41dfa01e2ecc"
	I0729 05:02:15.878682   23138 logs.go:123] Gathering logs for storage-provisioner [b62fcc7f4e8c] ...
	I0729 05:02:15.878695   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fcc7f4e8c"
	I0729 05:02:15.890179   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 05:02:15.890189   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 05:02:15.921173   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:02:15.921266   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:02:15.922463   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:02:15.922466   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:02:15.992523   23138 logs.go:123] Gathering logs for etcd [b4198e847f69] ...
	I0729 05:02:15.992536   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4198e847f69"
	I0729 05:02:16.006807   23138 logs.go:123] Gathering logs for coredns [fad55b622fed] ...
	I0729 05:02:16.006818   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fad55b622fed"
	I0729 05:02:16.018526   23138 logs.go:123] Gathering logs for kube-scheduler [2da429e82c93] ...
	I0729 05:02:16.018538   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2da429e82c93"
	I0729 05:02:16.034575   23138 logs.go:123] Gathering logs for container status ...
	I0729 05:02:16.034588   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:02:16.046399   23138 logs.go:123] Gathering logs for kube-apiserver [2334b557e5a6] ...
	I0729 05:02:16.046413   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2334b557e5a6"
	I0729 05:02:16.060535   23138 logs.go:123] Gathering logs for coredns [1bb990caee39] ...
	I0729 05:02:16.060546   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bb990caee39"
	I0729 05:02:16.072291   23138 logs.go:123] Gathering logs for coredns [6c10a0abec53] ...
	I0729 05:02:16.072304   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c10a0abec53"
	I0729 05:02:16.083824   23138 logs.go:123] Gathering logs for Docker ...
	I0729 05:02:16.083837   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:02:16.107989   23138 logs.go:123] Gathering logs for kube-proxy [0200e3b5446e] ...
	I0729 05:02:16.107997   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0200e3b5446e"
	I0729 05:02:16.119675   23138 logs.go:123] Gathering logs for kube-controller-manager [91b11ed29fb4] ...
	I0729 05:02:16.119689   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91b11ed29fb4"
	I0729 05:02:16.136804   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:02:16.136815   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 05:02:16.136840   23138 out.go:239] X Problems detected in kubelet:
	W0729 05:02:16.136844   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:02:16.136849   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:02:16.136852   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:02:16.136856   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:02:26.140816   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:02:31.142983   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:02:31.143205   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:02:31.165002   23138 logs.go:276] 1 containers: [2334b557e5a6]
	I0729 05:02:31.165102   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:02:31.182585   23138 logs.go:276] 1 containers: [b4198e847f69]
	I0729 05:02:31.182664   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:02:31.194664   23138 logs.go:276] 4 containers: [1bb990caee39 fad55b622fed 6c10a0abec53 41dfa01e2ecc]
	I0729 05:02:31.194734   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:02:31.204798   23138 logs.go:276] 1 containers: [2da429e82c93]
	I0729 05:02:31.204858   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:02:31.221126   23138 logs.go:276] 1 containers: [0200e3b5446e]
	I0729 05:02:31.221197   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:02:31.231881   23138 logs.go:276] 1 containers: [91b11ed29fb4]
	I0729 05:02:31.231950   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:02:31.242409   23138 logs.go:276] 0 containers: []
	W0729 05:02:31.242420   23138 logs.go:278] No container was found matching "kindnet"
	I0729 05:02:31.242474   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:02:31.252714   23138 logs.go:276] 1 containers: [b62fcc7f4e8c]
	I0729 05:02:31.252731   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:02:31.252736   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:02:31.287147   23138 logs.go:123] Gathering logs for coredns [1bb990caee39] ...
	I0729 05:02:31.287160   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bb990caee39"
	I0729 05:02:31.299102   23138 logs.go:123] Gathering logs for Docker ...
	I0729 05:02:31.299116   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:02:31.324689   23138 logs.go:123] Gathering logs for coredns [fad55b622fed] ...
	I0729 05:02:31.324702   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fad55b622fed"
	I0729 05:02:31.336321   23138 logs.go:123] Gathering logs for kube-proxy [0200e3b5446e] ...
	I0729 05:02:31.336331   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0200e3b5446e"
	I0729 05:02:31.348329   23138 logs.go:123] Gathering logs for etcd [b4198e847f69] ...
	I0729 05:02:31.348341   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4198e847f69"
	I0729 05:02:31.365995   23138 logs.go:123] Gathering logs for coredns [41dfa01e2ecc] ...
	I0729 05:02:31.366010   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41dfa01e2ecc"
	I0729 05:02:31.381750   23138 logs.go:123] Gathering logs for kube-controller-manager [91b11ed29fb4] ...
	I0729 05:02:31.381759   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91b11ed29fb4"
	I0729 05:02:31.400033   23138 logs.go:123] Gathering logs for storage-provisioner [b62fcc7f4e8c] ...
	I0729 05:02:31.400045   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fcc7f4e8c"
	I0729 05:02:31.411632   23138 logs.go:123] Gathering logs for container status ...
	I0729 05:02:31.411643   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:02:31.423601   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 05:02:31.423615   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 05:02:31.455474   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:02:31.455574   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:02:31.456853   23138 logs.go:123] Gathering logs for kube-apiserver [2334b557e5a6] ...
	I0729 05:02:31.456860   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2334b557e5a6"
	I0729 05:02:31.473144   23138 logs.go:123] Gathering logs for kube-scheduler [2da429e82c93] ...
	I0729 05:02:31.473163   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2da429e82c93"
	I0729 05:02:31.487460   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 05:02:31.487470   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:02:31.492738   23138 logs.go:123] Gathering logs for coredns [6c10a0abec53] ...
	I0729 05:02:31.492746   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c10a0abec53"
	I0729 05:02:31.504657   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:02:31.504668   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 05:02:31.504696   23138 out.go:239] X Problems detected in kubelet:
	W0729 05:02:31.504701   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:02:31.504717   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:02:31.504723   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:02:31.504766   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:02:41.508729   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:02:46.510969   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:02:46.511108   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:02:46.529184   23138 logs.go:276] 1 containers: [2334b557e5a6]
	I0729 05:02:46.529284   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:02:46.542603   23138 logs.go:276] 1 containers: [b4198e847f69]
	I0729 05:02:46.542680   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:02:46.554044   23138 logs.go:276] 4 containers: [1bb990caee39 fad55b622fed 6c10a0abec53 41dfa01e2ecc]
	I0729 05:02:46.554111   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:02:46.564861   23138 logs.go:276] 1 containers: [2da429e82c93]
	I0729 05:02:46.564925   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:02:46.575315   23138 logs.go:276] 1 containers: [0200e3b5446e]
	I0729 05:02:46.575379   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:02:46.585819   23138 logs.go:276] 1 containers: [91b11ed29fb4]
	I0729 05:02:46.585882   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:02:46.595932   23138 logs.go:276] 0 containers: []
	W0729 05:02:46.595943   23138 logs.go:278] No container was found matching "kindnet"
	I0729 05:02:46.595999   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:02:46.610032   23138 logs.go:276] 1 containers: [b62fcc7f4e8c]
	I0729 05:02:46.610052   23138 logs.go:123] Gathering logs for coredns [1bb990caee39] ...
	I0729 05:02:46.610058   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bb990caee39"
	I0729 05:02:46.621632   23138 logs.go:123] Gathering logs for kube-proxy [0200e3b5446e] ...
	I0729 05:02:46.621642   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0200e3b5446e"
	I0729 05:02:46.633796   23138 logs.go:123] Gathering logs for kube-controller-manager [91b11ed29fb4] ...
	I0729 05:02:46.633807   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91b11ed29fb4"
	I0729 05:02:46.654043   23138 logs.go:123] Gathering logs for storage-provisioner [b62fcc7f4e8c] ...
	I0729 05:02:46.654054   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fcc7f4e8c"
	I0729 05:02:46.665604   23138 logs.go:123] Gathering logs for container status ...
	I0729 05:02:46.665614   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:02:46.677084   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 05:02:46.677096   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:02:46.681273   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:02:46.681280   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:02:46.717893   23138 logs.go:123] Gathering logs for etcd [b4198e847f69] ...
	I0729 05:02:46.717910   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4198e847f69"
	I0729 05:02:46.733339   23138 logs.go:123] Gathering logs for kube-scheduler [2da429e82c93] ...
	I0729 05:02:46.733352   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2da429e82c93"
	I0729 05:02:46.747932   23138 logs.go:123] Gathering logs for Docker ...
	I0729 05:02:46.747943   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:02:46.772119   23138 logs.go:123] Gathering logs for coredns [fad55b622fed] ...
	I0729 05:02:46.772128   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fad55b622fed"
	I0729 05:02:46.784193   23138 logs.go:123] Gathering logs for coredns [6c10a0abec53] ...
	I0729 05:02:46.784204   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c10a0abec53"
	I0729 05:02:46.795619   23138 logs.go:123] Gathering logs for coredns [41dfa01e2ecc] ...
	I0729 05:02:46.795630   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41dfa01e2ecc"
	I0729 05:02:46.807210   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 05:02:46.807221   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 05:02:46.840890   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:02:46.840984   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:02:46.842175   23138 logs.go:123] Gathering logs for kube-apiserver [2334b557e5a6] ...
	I0729 05:02:46.842180   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2334b557e5a6"
	I0729 05:02:46.858707   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:02:46.858718   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 05:02:46.858744   23138 out.go:239] X Problems detected in kubelet:
	W0729 05:02:46.858748   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:02:46.858753   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:02:46.858759   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:02:46.858762   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:02:56.862731   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:03:01.865040   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:03:01.865224   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:03:01.883169   23138 logs.go:276] 1 containers: [2334b557e5a6]
	I0729 05:03:01.883252   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:03:01.896686   23138 logs.go:276] 1 containers: [b4198e847f69]
	I0729 05:03:01.896758   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:03:01.909129   23138 logs.go:276] 4 containers: [1bb990caee39 fad55b622fed 6c10a0abec53 41dfa01e2ecc]
	I0729 05:03:01.909202   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:03:01.920013   23138 logs.go:276] 1 containers: [2da429e82c93]
	I0729 05:03:01.920076   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:03:01.930724   23138 logs.go:276] 1 containers: [0200e3b5446e]
	I0729 05:03:01.930798   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:03:01.941025   23138 logs.go:276] 1 containers: [91b11ed29fb4]
	I0729 05:03:01.941091   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:03:01.951661   23138 logs.go:276] 0 containers: []
	W0729 05:03:01.951670   23138 logs.go:278] No container was found matching "kindnet"
	I0729 05:03:01.951722   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:03:01.962361   23138 logs.go:276] 1 containers: [b62fcc7f4e8c]
	I0729 05:03:01.962379   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 05:03:01.962384   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:03:01.966948   23138 logs.go:123] Gathering logs for coredns [1bb990caee39] ...
	I0729 05:03:01.966957   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bb990caee39"
	I0729 05:03:01.982605   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 05:03:01.982619   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 05:03:02.014240   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:03:02.014333   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:03:02.015526   23138 logs.go:123] Gathering logs for kube-apiserver [2334b557e5a6] ...
	I0729 05:03:02.015531   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2334b557e5a6"
	I0729 05:03:02.030408   23138 logs.go:123] Gathering logs for coredns [fad55b622fed] ...
	I0729 05:03:02.030418   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fad55b622fed"
	I0729 05:03:02.041724   23138 logs.go:123] Gathering logs for storage-provisioner [b62fcc7f4e8c] ...
	I0729 05:03:02.041735   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fcc7f4e8c"
	I0729 05:03:02.055501   23138 logs.go:123] Gathering logs for Docker ...
	I0729 05:03:02.055511   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:03:02.080877   23138 logs.go:123] Gathering logs for container status ...
	I0729 05:03:02.080888   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:03:02.093808   23138 logs.go:123] Gathering logs for etcd [b4198e847f69] ...
	I0729 05:03:02.093823   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4198e847f69"
	I0729 05:03:02.107756   23138 logs.go:123] Gathering logs for coredns [6c10a0abec53] ...
	I0729 05:03:02.107767   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c10a0abec53"
	I0729 05:03:02.119207   23138 logs.go:123] Gathering logs for coredns [41dfa01e2ecc] ...
	I0729 05:03:02.119220   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41dfa01e2ecc"
	I0729 05:03:02.134801   23138 logs.go:123] Gathering logs for kube-scheduler [2da429e82c93] ...
	I0729 05:03:02.134812   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2da429e82c93"
	I0729 05:03:02.149531   23138 logs.go:123] Gathering logs for kube-proxy [0200e3b5446e] ...
	I0729 05:03:02.149545   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0200e3b5446e"
	I0729 05:03:02.161584   23138 logs.go:123] Gathering logs for kube-controller-manager [91b11ed29fb4] ...
	I0729 05:03:02.161595   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91b11ed29fb4"
	I0729 05:03:02.179370   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:03:02.179382   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:03:02.215291   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:03:02.215302   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 05:03:02.215331   23138 out.go:239] X Problems detected in kubelet:
	W0729 05:03:02.215336   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:03:02.215341   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:03:02.215345   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:03:02.215348   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:03:12.218948   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:03:17.221190   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:03:17.221451   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 05:03:17.243293   23138 logs.go:276] 1 containers: [2334b557e5a6]
	I0729 05:03:17.243411   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 05:03:17.258602   23138 logs.go:276] 1 containers: [b4198e847f69]
	I0729 05:03:17.258687   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 05:03:17.271967   23138 logs.go:276] 4 containers: [1bb990caee39 fad55b622fed 6c10a0abec53 41dfa01e2ecc]
	I0729 05:03:17.272034   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 05:03:17.283458   23138 logs.go:276] 1 containers: [2da429e82c93]
	I0729 05:03:17.283524   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 05:03:17.293754   23138 logs.go:276] 1 containers: [0200e3b5446e]
	I0729 05:03:17.293825   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 05:03:17.304474   23138 logs.go:276] 1 containers: [91b11ed29fb4]
	I0729 05:03:17.304544   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 05:03:17.315015   23138 logs.go:276] 0 containers: []
	W0729 05:03:17.315026   23138 logs.go:278] No container was found matching "kindnet"
	I0729 05:03:17.315080   23138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 05:03:17.325561   23138 logs.go:276] 1 containers: [b62fcc7f4e8c]
	I0729 05:03:17.325581   23138 logs.go:123] Gathering logs for kubelet ...
	I0729 05:03:17.325586   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 05:03:17.357188   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:03:17.357282   23138 logs.go:138] Found kubelet problem: Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:03:17.358512   23138 logs.go:123] Gathering logs for storage-provisioner [b62fcc7f4e8c] ...
	I0729 05:03:17.358519   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62fcc7f4e8c"
	I0729 05:03:17.370520   23138 logs.go:123] Gathering logs for Docker ...
	I0729 05:03:17.370532   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 05:03:17.396460   23138 logs.go:123] Gathering logs for container status ...
	I0729 05:03:17.396471   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 05:03:17.408266   23138 logs.go:123] Gathering logs for dmesg ...
	I0729 05:03:17.408275   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 05:03:17.412465   23138 logs.go:123] Gathering logs for describe nodes ...
	I0729 05:03:17.412471   23138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 05:03:17.448127   23138 logs.go:123] Gathering logs for coredns [1bb990caee39] ...
	I0729 05:03:17.448142   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bb990caee39"
	I0729 05:03:17.466067   23138 logs.go:123] Gathering logs for kube-controller-manager [91b11ed29fb4] ...
	I0729 05:03:17.466085   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91b11ed29fb4"
	I0729 05:03:17.499038   23138 logs.go:123] Gathering logs for kube-apiserver [2334b557e5a6] ...
	I0729 05:03:17.499059   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2334b557e5a6"
	I0729 05:03:17.520568   23138 logs.go:123] Gathering logs for etcd [b4198e847f69] ...
	I0729 05:03:17.520585   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4198e847f69"
	I0729 05:03:17.534865   23138 logs.go:123] Gathering logs for coredns [fad55b622fed] ...
	I0729 05:03:17.534877   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fad55b622fed"
	I0729 05:03:17.546374   23138 logs.go:123] Gathering logs for coredns [41dfa01e2ecc] ...
	I0729 05:03:17.546385   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41dfa01e2ecc"
	I0729 05:03:17.558202   23138 logs.go:123] Gathering logs for kube-scheduler [2da429e82c93] ...
	I0729 05:03:17.558218   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2da429e82c93"
	I0729 05:03:17.580498   23138 logs.go:123] Gathering logs for kube-proxy [0200e3b5446e] ...
	I0729 05:03:17.580513   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0200e3b5446e"
	I0729 05:03:17.593976   23138 logs.go:123] Gathering logs for coredns [6c10a0abec53] ...
	I0729 05:03:17.593986   23138 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c10a0abec53"
	I0729 05:03:17.605227   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:03:17.605237   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 05:03:17.605268   23138 out.go:239] X Problems detected in kubelet:
	W0729 05:03:17.605273   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: W0729 11:59:41.595428   10541 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	W0729 05:03:17.605277   23138 out.go:239]   Jul 29 11:59:41 stopped-upgrade-370000 kubelet[10541]: E0729 11:59:41.595464   10541 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-370000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-370000' and this object
	I0729 05:03:17.605282   23138 out.go:304] Setting ErrFile to fd 2...
	I0729 05:03:17.605285   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:03:27.609292   23138 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 05:03:32.609904   23138 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 05:03:32.614286   23138 out.go:177] 
	W0729 05:03:32.618095   23138 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 05:03:32.618100   23138 out.go:239] * 
	W0729 05:03:32.618579   23138 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:03:32.629212   23138 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-370000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (591.08s)
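
Note: the failure above is minikube's api_server.go healthz poll timing out for the full 6m0s wait; every probe of https://10.0.2.15:8443/healthz ends in "context deadline exceeded" while the log gathering between probes keeps flagging the same kubelet RBAC message. A minimal way to rerun the probe by hand, assuming the profile were still running and that curl exists in the guest image (both assumptions, not shown by this log):

	# Probe the apiserver the way api_server.go does. -k skips TLS
	# verification of the self-signed cert; --max-time bounds the request
	# roughly like the client timeout seen in the log.
	out/minikube-darwin-arm64 -p stopped-upgrade-370000 ssh -- \
	  curl -k --max-time 5 https://10.0.2.15:8443/healthz

A healthy apiserver answers "ok"; here the request would hang until the timeout, matching the "stopped:" lines above.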

TestPause/serial/Start (9.88s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-031000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-031000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.8162515s)

-- stdout --
	* [pause-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-031000" primary control-plane node in "pause-031000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-031000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-031000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-031000 -n pause-031000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-031000 -n pause-031000: exit status 7 (66.650708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-031000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.88s)
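
Note: this and every remaining qemu2 failure in this report share one root cause, visible in the stderr above: nothing is accepting connections on /var/run/socket_vmnet, so both the initial create and the retry die before a VM exists. A host-side check, as a sketch (the launchd label below is an assumption based on the upstream socket_vmnet project, not something this report shows):

	# Does the socket exist, and is any process holding it open?
	ls -l /var/run/socket_vmnet
	sudo lsof /var/run/socket_vmnet || echo "no listener on socket_vmnet"

	# If socket_vmnet is managed by launchd, restart it (label assumed):
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet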

TestNoKubernetes/serial/StartWithK8s (9.98s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-911000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-911000 --driver=qemu2 : exit status 80 (9.928132667s)

-- stdout --
	* [NoKubernetes-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-911000" primary control-plane node in "NoKubernetes-911000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-911000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-911000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-911000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000: exit status 7 (50.935834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.98s)
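
Note: the stderr above already names the usual recovery path: delete the half-created profile, then retry once the socket_vmnet listener is back (see the note after TestPause). A sketch of that flow, using only commands that appear in this report:

	out/minikube-darwin-arm64 delete -p NoKubernetes-911000
	out/minikube-darwin-arm64 start -p NoKubernetes-911000 --driver=qemu2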

TestNoKubernetes/serial/StartWithStopK8s (7.44s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --driver=qemu2 : exit status 80 (7.394312416s)

-- stdout --
	* [NoKubernetes-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-911000
	* Restarting existing qemu2 VM for "NoKubernetes-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-911000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000: exit status 7 (49.902583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.44s)

TestNoKubernetes/serial/Start (7.54s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --driver=qemu2 : exit status 80 (7.506764958s)

-- stdout --
	* [NoKubernetes-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-911000
	* Restarting existing qemu2 VM for "NoKubernetes-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-911000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000: exit status 7 (34.009958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (7.54s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.74s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19338
- KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1457487220/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.74s)
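
Note: this failure and the upgrade-v1.2.0-to-current one below are environmental rather than regressions: hyperkit ships only for x86_64 Macs, so DRV_UNSUPPORTED_OS is the expected exit on this arm64 agent. A one-line host check, as a sketch:

	uname -m    # prints "arm64" on this agent; hyperkit has no arm64 build

On Apple Silicon, the qemu2 driver used by the other tests in this report is the supported path.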

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.57s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19338
- KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1443802851/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.57s)

TestNoKubernetes/serial/StartNoArgs (5.35s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-911000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-911000 --driver=qemu2 : exit status 80 (5.283915083s)

-- stdout --
	* [NoKubernetes-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-911000
	* Restarting existing qemu2 VM for "NoKubernetes-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-911000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-911000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000: exit status 7 (70.212084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.35s)
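
This failure, like the others in this report, traces to a single root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so every VM start aborts before QEMU launches. The daemon's socket can be probed directly with a short Go program; the following is an illustrative sketch (not part of the test harness) that assumes only the socket path reported above.

    // socketprobe.go - minimal sketch: check whether the socket_vmnet daemon
    // is accepting connections on the path reported in the failures above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const path = "/var/run/socket_vmnet" // path taken from the test logs

        // A plain unix-domain dial reproduces what socket_vmnet_client
        // attempts before handing a connected fd to qemu-system-aarch64.
        conn, err := net.DialTimeout("unix", path, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

A "connection refused" from this dial means the daemon is down or its socket is stale; if socket_vmnet was installed through Homebrew on the agent, restarting it (for example with `sudo brew services restart socket_vmnet`) should clear this whole class of GUEST_PROVISION failures.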

TestNetworkPlugins/group/auto/Start (9.8s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.798053458s)

-- stdout --
	* [auto-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-394000" primary control-plane node in "auto-394000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-394000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:05:20.850653   23791 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:05:20.850803   23791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:05:20.850806   23791 out.go:304] Setting ErrFile to fd 2...
	I0729 05:05:20.850808   23791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:05:20.850943   23791 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:05:20.852023   23791 out.go:298] Setting JSON to false
	I0729 05:05:20.868032   23791 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11089,"bootTime":1722243631,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:05:20.868106   23791 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:05:20.874901   23791 out.go:177] * [auto-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:05:20.881842   23791 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:05:20.881915   23791 notify.go:220] Checking for updates...
	I0729 05:05:20.889909   23791 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:05:20.892800   23791 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:05:20.895844   23791 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:05:20.898880   23791 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:05:20.901815   23791 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:05:20.905255   23791 config.go:182] Loaded profile config "cert-expiration-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:05:20.905327   23791 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:05:20.905367   23791 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:05:20.909888   23791 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:05:20.916856   23791 start.go:297] selected driver: qemu2
	I0729 05:05:20.916863   23791 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:05:20.916874   23791 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:05:20.919425   23791 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:05:20.922882   23791 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:05:20.926864   23791 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:05:20.926908   23791 cni.go:84] Creating CNI manager for ""
	I0729 05:05:20.926915   23791 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:05:20.926919   23791 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 05:05:20.926939   23791 start.go:340] cluster config:
	{Name:auto-394000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:05:20.930769   23791 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:05:20.937868   23791 out.go:177] * Starting "auto-394000" primary control-plane node in "auto-394000" cluster
	I0729 05:05:20.941860   23791 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:05:20.941878   23791 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:05:20.941891   23791 cache.go:56] Caching tarball of preloaded images
	I0729 05:05:20.941957   23791 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:05:20.941971   23791 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:05:20.942040   23791 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/auto-394000/config.json ...
	I0729 05:05:20.942052   23791 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/auto-394000/config.json: {Name:mk13a2e31f3ac0fb46ab33cc39de4f13da1eb375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:05:20.942296   23791 start.go:360] acquireMachinesLock for auto-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:05:20.942333   23791 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "auto-394000"
	I0729 05:05:20.942345   23791 start.go:93] Provisioning new machine with config: &{Name:auto-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:05:20.942371   23791 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:05:20.948848   23791 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:05:20.967246   23791 start.go:159] libmachine.API.Create for "auto-394000" (driver="qemu2")
	I0729 05:05:20.967277   23791 client.go:168] LocalClient.Create starting
	I0729 05:05:20.967343   23791 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:05:20.967378   23791 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:20.967387   23791 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:20.967429   23791 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:05:20.967453   23791 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:20.967461   23791 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:20.967872   23791 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:05:21.118159   23791 main.go:141] libmachine: Creating SSH key...
	I0729 05:05:21.203549   23791 main.go:141] libmachine: Creating Disk image...
	I0729 05:05:21.203554   23791 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:05:21.203772   23791 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/disk.qcow2
	I0729 05:05:21.212988   23791 main.go:141] libmachine: STDOUT: 
	I0729 05:05:21.213005   23791 main.go:141] libmachine: STDERR: 
	I0729 05:05:21.213055   23791 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/disk.qcow2 +20000M
	I0729 05:05:21.220887   23791 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:05:21.220899   23791 main.go:141] libmachine: STDERR: 
	I0729 05:05:21.220909   23791 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/disk.qcow2
	I0729 05:05:21.220914   23791 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:05:21.220930   23791 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:05:21.220956   23791 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:91:01:93:f3:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/disk.qcow2
	I0729 05:05:21.222509   23791 main.go:141] libmachine: STDOUT: 
	I0729 05:05:21.222526   23791 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:05:21.222546   23791 client.go:171] duration metric: took 255.268917ms to LocalClient.Create
	I0729 05:05:23.224682   23791 start.go:128] duration metric: took 2.282323291s to createHost
	I0729 05:05:23.224731   23791 start.go:83] releasing machines lock for "auto-394000", held for 2.282431s
	W0729 05:05:23.224799   23791 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:23.236047   23791 out.go:177] * Deleting "auto-394000" in qemu2 ...
	W0729 05:05:23.266742   23791 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:23.266766   23791 start.go:729] Will try again in 5 seconds ...
	I0729 05:05:28.268911   23791 start.go:360] acquireMachinesLock for auto-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:05:28.269563   23791 start.go:364] duration metric: took 524µs to acquireMachinesLock for "auto-394000"
	I0729 05:05:28.269719   23791 start.go:93] Provisioning new machine with config: &{Name:auto-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:05:28.270023   23791 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:05:28.286680   23791 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:05:28.335724   23791 start.go:159] libmachine.API.Create for "auto-394000" (driver="qemu2")
	I0729 05:05:28.335768   23791 client.go:168] LocalClient.Create starting
	I0729 05:05:28.335869   23791 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:05:28.335928   23791 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:28.335946   23791 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:28.336008   23791 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:05:28.336054   23791 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:28.336068   23791 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:28.336565   23791 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:05:28.496174   23791 main.go:141] libmachine: Creating SSH key...
	I0729 05:05:28.540052   23791 main.go:141] libmachine: Creating Disk image...
	I0729 05:05:28.540061   23791 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:05:28.540290   23791 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/disk.qcow2
	I0729 05:05:28.549443   23791 main.go:141] libmachine: STDOUT: 
	I0729 05:05:28.549462   23791 main.go:141] libmachine: STDERR: 
	I0729 05:05:28.549514   23791 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/disk.qcow2 +20000M
	I0729 05:05:28.557248   23791 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:05:28.557263   23791 main.go:141] libmachine: STDERR: 
	I0729 05:05:28.557276   23791 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/disk.qcow2
	I0729 05:05:28.557279   23791 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:05:28.557310   23791 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:05:28.557340   23791 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:11:9d:10:d4:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/auto-394000/disk.qcow2
	I0729 05:05:28.558934   23791 main.go:141] libmachine: STDOUT: 
	I0729 05:05:28.558949   23791 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:05:28.558960   23791 client.go:171] duration metric: took 223.192833ms to LocalClient.Create
	I0729 05:05:30.561179   23791 start.go:128] duration metric: took 2.291094625s to createHost
	I0729 05:05:30.561244   23791 start.go:83] releasing machines lock for "auto-394000", held for 2.29169475s
	W0729 05:05:30.561669   23791 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:30.574312   23791 out.go:177] 
	W0729 05:05:30.579449   23791 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:05:30.579480   23791 out.go:239] * 
	* 
	W0729 05:05:30.582137   23791 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:05:30.605266   23791 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.80s)
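
The trace above also shows minikube's recovery path: the first createHost attempt fails, the half-created profile is deleted, start.go waits five seconds ("Will try again in 5 seconds ..."), and a single retry runs before the command gives up with GUEST_PROVISION and exit status 80. A condensed sketch of that shape, using hypothetical helper names rather than minikube's actual functions:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for the driver's host-creation step; in this run
    // it always fails with the socket_vmnet error seen in the logs.
    func createHost(name string) error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func startWithRetry(name string) error {
        if err := createHost(name); err != nil {
            // Mirrors "StartHost failed, but will try again" plus profile deletion.
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            fmt.Printf("* Deleting %q ...\n", name)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err := createHost(name); err != nil {
                // Second failure is fatal: GUEST_PROVISION, exit status 80.
                return fmt.Errorf("error provisioning guest: %w", err)
            }
        }
        return nil
    }

    func main() {
        if err := startWithRetry("auto-394000"); err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }

Because the daemon never comes back within those five seconds, the retry is guaranteed to fail the same way, which is why each start in this group costs roughly ten seconds (two ~2.3s createHost attempts plus the 5s wait) before reporting the error.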

TestNetworkPlugins/group/kindnet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.835538792s)

-- stdout --
	* [kindnet-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-394000" primary control-plane node in "kindnet-394000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-394000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:05:33.001145   23902 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:05:33.001553   23902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:05:33.001557   23902 out.go:304] Setting ErrFile to fd 2...
	I0729 05:05:33.001559   23902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:05:33.001710   23902 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:05:33.002937   23902 out.go:298] Setting JSON to false
	I0729 05:05:33.019476   23902 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11102,"bootTime":1722243631,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:05:33.019535   23902 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:05:33.024583   23902 out.go:177] * [kindnet-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:05:33.032521   23902 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:05:33.032608   23902 notify.go:220] Checking for updates...
	I0729 05:05:33.040427   23902 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:05:33.044477   23902 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:05:33.047435   23902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:05:33.050520   23902 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:05:33.053458   23902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:05:33.055239   23902 config.go:182] Loaded profile config "cert-expiration-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:05:33.055316   23902 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:05:33.055368   23902 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:05:33.059495   23902 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:05:33.066339   23902 start.go:297] selected driver: qemu2
	I0729 05:05:33.066347   23902 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:05:33.066356   23902 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:05:33.068619   23902 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:05:33.071402   23902 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:05:33.075558   23902 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:05:33.075606   23902 cni.go:84] Creating CNI manager for "kindnet"
	I0729 05:05:33.075611   23902 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 05:05:33.075647   23902 start.go:340] cluster config:
	{Name:kindnet-394000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:05:33.079343   23902 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:05:33.087455   23902 out.go:177] * Starting "kindnet-394000" primary control-plane node in "kindnet-394000" cluster
	I0729 05:05:33.091491   23902 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:05:33.091507   23902 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:05:33.091520   23902 cache.go:56] Caching tarball of preloaded images
	I0729 05:05:33.091590   23902 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:05:33.091597   23902 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:05:33.091675   23902 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/kindnet-394000/config.json ...
	I0729 05:05:33.091704   23902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/kindnet-394000/config.json: {Name:mkf1d15543d2f8b41aa7883028420cbb8c218b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:05:33.091947   23902 start.go:360] acquireMachinesLock for kindnet-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:05:33.091984   23902 start.go:364] duration metric: took 30.292µs to acquireMachinesLock for "kindnet-394000"
	I0729 05:05:33.091996   23902 start.go:93] Provisioning new machine with config: &{Name:kindnet-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:05:33.092025   23902 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:05:33.100500   23902 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:05:33.118966   23902 start.go:159] libmachine.API.Create for "kindnet-394000" (driver="qemu2")
	I0729 05:05:33.119000   23902 client.go:168] LocalClient.Create starting
	I0729 05:05:33.119071   23902 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:05:33.119102   23902 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:33.119113   23902 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:33.119159   23902 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:05:33.119184   23902 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:33.119194   23902 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:33.119568   23902 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:05:33.269866   23902 main.go:141] libmachine: Creating SSH key...
	I0729 05:05:33.370851   23902 main.go:141] libmachine: Creating Disk image...
	I0729 05:05:33.370856   23902 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:05:33.371063   23902 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/disk.qcow2
	I0729 05:05:33.380110   23902 main.go:141] libmachine: STDOUT: 
	I0729 05:05:33.380127   23902 main.go:141] libmachine: STDERR: 
	I0729 05:05:33.380169   23902 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/disk.qcow2 +20000M
	I0729 05:05:33.387927   23902 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:05:33.387941   23902 main.go:141] libmachine: STDERR: 
	I0729 05:05:33.387960   23902 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/disk.qcow2
	I0729 05:05:33.387964   23902 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:05:33.387978   23902 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:05:33.388008   23902 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:93:79:16:71:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/disk.qcow2
	I0729 05:05:33.389588   23902 main.go:141] libmachine: STDOUT: 
	I0729 05:05:33.389603   23902 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:05:33.389620   23902 client.go:171] duration metric: took 270.619667ms to LocalClient.Create
	I0729 05:05:35.391758   23902 start.go:128] duration metric: took 2.299753583s to createHost
	I0729 05:05:35.391859   23902 start.go:83] releasing machines lock for "kindnet-394000", held for 2.29988425s
	W0729 05:05:35.391925   23902 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:35.402918   23902 out.go:177] * Deleting "kindnet-394000" in qemu2 ...
	W0729 05:05:35.433459   23902 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:35.433476   23902 start.go:729] Will try again in 5 seconds ...
	I0729 05:05:40.435543   23902 start.go:360] acquireMachinesLock for kindnet-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:05:40.435990   23902 start.go:364] duration metric: took 312.25µs to acquireMachinesLock for "kindnet-394000"
	I0729 05:05:40.436134   23902 start.go:93] Provisioning new machine with config: &{Name:kindnet-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:05:40.436464   23902 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:05:40.452185   23902 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:05:40.503085   23902 start.go:159] libmachine.API.Create for "kindnet-394000" (driver="qemu2")
	I0729 05:05:40.503134   23902 client.go:168] LocalClient.Create starting
	I0729 05:05:40.503283   23902 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:05:40.503351   23902 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:40.503366   23902 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:40.503425   23902 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:05:40.503475   23902 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:40.503487   23902 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:40.504045   23902 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:05:40.665477   23902 main.go:141] libmachine: Creating SSH key...
	I0729 05:05:40.743472   23902 main.go:141] libmachine: Creating Disk image...
	I0729 05:05:40.743480   23902 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:05:40.743689   23902 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/disk.qcow2
	I0729 05:05:40.753022   23902 main.go:141] libmachine: STDOUT: 
	I0729 05:05:40.753039   23902 main.go:141] libmachine: STDERR: 
	I0729 05:05:40.753096   23902 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/disk.qcow2 +20000M
	I0729 05:05:40.760915   23902 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:05:40.760931   23902 main.go:141] libmachine: STDERR: 
	I0729 05:05:40.760940   23902 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/disk.qcow2
	I0729 05:05:40.760945   23902 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:05:40.760956   23902 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:05:40.760989   23902 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:cf:de:0f:4b:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kindnet-394000/disk.qcow2
	I0729 05:05:40.762641   23902 main.go:141] libmachine: STDOUT: 
	I0729 05:05:40.762656   23902 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:05:40.762680   23902 client.go:171] duration metric: took 259.533917ms to LocalClient.Create
	I0729 05:05:42.764813   23902 start.go:128] duration metric: took 2.328362083s to createHost
	I0729 05:05:42.764875   23902 start.go:83] releasing machines lock for "kindnet-394000", held for 2.328904875s
	W0729 05:05:42.765274   23902 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:42.774939   23902 out.go:177] 
	W0729 05:05:42.781866   23902 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:05:42.781890   23902 out.go:239] * 
	* 
	W0729 05:05:42.784808   23902 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:05:42.793881   23902 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.84s)
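
The "executing:" lines make the mechanics explicit: libmachine does not start qemu-system-aarch64 directly but through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and passes the connected fd to QEMU as `-netdev socket,id=net0,fd=3`. The "Connection refused" error therefore comes from the wrapper before QEMU runs at all. A minimal, illustrative way to reproduce just that wrapper step from Go (paths copied from the logs, argument list trimmed; this is not the libmachine implementation):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // Wrapper and socket path are taken verbatim from the log lines above;
        // the qemu arguments are cut down to a minimum for illustration.
        cmd := exec.Command(
            "/opt/socket_vmnet/bin/socket_vmnet_client",
            "/var/run/socket_vmnet",
            "qemu-system-aarch64", "-M", "virt,highmem=off", "-display", "none",
        )

        var stderr bytes.Buffer
        cmd.Stderr = &stderr

        // With the daemon down, the wrapper exits with status 1 and prints
        // the same "Connection refused" message the tests report.
        if err := cmd.Run(); err != nil {
            fmt.Printf("wrapper failed (%v): %s\n", err, stderr.String())
        }
    }

Run on the affected agent, this should reproduce the identical "Failed to connect" message, pointing at the daemon rather than QEMU or the minikube binary as the component at fault.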

TestNetworkPlugins/group/flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.853555541s)

-- stdout --
	* [flannel-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-394000" primary control-plane node in "flannel-394000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-394000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:05:45.137582   24015 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:05:45.137728   24015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:05:45.137731   24015 out.go:304] Setting ErrFile to fd 2...
	I0729 05:05:45.137734   24015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:05:45.137872   24015 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:05:45.138874   24015 out.go:298] Setting JSON to false
	I0729 05:05:45.154831   24015 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11114,"bootTime":1722243631,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:05:45.154898   24015 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:05:45.160003   24015 out.go:177] * [flannel-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:05:45.167697   24015 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:05:45.167734   24015 notify.go:220] Checking for updates...
	I0729 05:05:45.174897   24015 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:05:45.177877   24015 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:05:45.181876   24015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:05:45.184866   24015 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:05:45.187835   24015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:05:45.191194   24015 config.go:182] Loaded profile config "cert-expiration-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:05:45.191266   24015 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:05:45.191325   24015 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:05:45.195927   24015 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:05:45.202845   24015 start.go:297] selected driver: qemu2
	I0729 05:05:45.202851   24015 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:05:45.202858   24015 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:05:45.205210   24015 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:05:45.208867   24015 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:05:45.211884   24015 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:05:45.211909   24015 cni.go:84] Creating CNI manager for "flannel"
	I0729 05:05:45.211914   24015 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0729 05:05:45.211959   24015 start.go:340] cluster config:
	{Name:flannel-394000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:05:45.215691   24015 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:05:45.222919   24015 out.go:177] * Starting "flannel-394000" primary control-plane node in "flannel-394000" cluster
	I0729 05:05:45.226897   24015 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:05:45.226914   24015 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:05:45.226929   24015 cache.go:56] Caching tarball of preloaded images
	I0729 05:05:45.227000   24015 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:05:45.227014   24015 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:05:45.227079   24015 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/flannel-394000/config.json ...
	I0729 05:05:45.227092   24015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/flannel-394000/config.json: {Name:mk715b7e14cd6062ec851e528a992d5a22ec703b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:05:45.227309   24015 start.go:360] acquireMachinesLock for flannel-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:05:45.227343   24015 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "flannel-394000"
	I0729 05:05:45.227355   24015 start.go:93] Provisioning new machine with config: &{Name:flannel-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:05:45.227379   24015 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:05:45.234877   24015 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:05:45.252672   24015 start.go:159] libmachine.API.Create for "flannel-394000" (driver="qemu2")
	I0729 05:05:45.252703   24015 client.go:168] LocalClient.Create starting
	I0729 05:05:45.252767   24015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:05:45.252797   24015 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:45.252807   24015 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:45.252854   24015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:05:45.252880   24015 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:45.252893   24015 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:45.253264   24015 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:05:45.405214   24015 main.go:141] libmachine: Creating SSH key...
	I0729 05:05:45.512695   24015 main.go:141] libmachine: Creating Disk image...
	I0729 05:05:45.512700   24015 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:05:45.512909   24015 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/disk.qcow2
	I0729 05:05:45.522323   24015 main.go:141] libmachine: STDOUT: 
	I0729 05:05:45.522340   24015 main.go:141] libmachine: STDERR: 
	I0729 05:05:45.522382   24015 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/disk.qcow2 +20000M
	I0729 05:05:45.530189   24015 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:05:45.530205   24015 main.go:141] libmachine: STDERR: 
	I0729 05:05:45.530218   24015 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/disk.qcow2
	I0729 05:05:45.530221   24015 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:05:45.530233   24015 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:05:45.530260   24015 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:9a:89:29:8f:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/disk.qcow2
	I0729 05:05:45.531863   24015 main.go:141] libmachine: STDOUT: 
	I0729 05:05:45.531878   24015 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:05:45.531896   24015 client.go:171] duration metric: took 279.194375ms to LocalClient.Create
	I0729 05:05:47.533936   24015 start.go:128] duration metric: took 2.306593041s to createHost
	I0729 05:05:47.533950   24015 start.go:83] releasing machines lock for "flannel-394000", held for 2.306645667s
	W0729 05:05:47.533968   24015 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:47.543477   24015 out.go:177] * Deleting "flannel-394000" in qemu2 ...
	W0729 05:05:47.553051   24015 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:47.553059   24015 start.go:729] Will try again in 5 seconds ...
	I0729 05:05:52.555327   24015 start.go:360] acquireMachinesLock for flannel-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:05:52.555793   24015 start.go:364] duration metric: took 365.041µs to acquireMachinesLock for "flannel-394000"
	I0729 05:05:52.555939   24015 start.go:93] Provisioning new machine with config: &{Name:flannel-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:05:52.556190   24015 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:05:52.561880   24015 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:05:52.612271   24015 start.go:159] libmachine.API.Create for "flannel-394000" (driver="qemu2")
	I0729 05:05:52.612323   24015 client.go:168] LocalClient.Create starting
	I0729 05:05:52.612438   24015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:05:52.612506   24015 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:52.612522   24015 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:52.612582   24015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:05:52.612625   24015 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:52.612640   24015 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:52.613824   24015 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:05:52.778220   24015 main.go:141] libmachine: Creating SSH key...
	I0729 05:05:52.899512   24015 main.go:141] libmachine: Creating Disk image...
	I0729 05:05:52.899517   24015 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:05:52.899773   24015 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/disk.qcow2
	I0729 05:05:52.909412   24015 main.go:141] libmachine: STDOUT: 
	I0729 05:05:52.909434   24015 main.go:141] libmachine: STDERR: 
	I0729 05:05:52.909490   24015 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/disk.qcow2 +20000M
	I0729 05:05:52.917274   24015 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:05:52.917290   24015 main.go:141] libmachine: STDERR: 
	I0729 05:05:52.917300   24015 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/disk.qcow2
	I0729 05:05:52.917303   24015 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:05:52.917320   24015 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:05:52.917351   24015 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:f5:f5:72:29:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/flannel-394000/disk.qcow2
	I0729 05:05:52.918933   24015 main.go:141] libmachine: STDOUT: 
	I0729 05:05:52.918951   24015 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:05:52.918964   24015 client.go:171] duration metric: took 306.641542ms to LocalClient.Create
	I0729 05:05:54.921099   24015 start.go:128] duration metric: took 2.36491525s to createHost
	I0729 05:05:54.921148   24015 start.go:83] releasing machines lock for "flannel-394000", held for 2.365370958s
	W0729 05:05:54.921520   24015 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:54.934300   24015 out.go:177] 
	W0729 05:05:54.938400   24015 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:05:54.938426   24015 out.go:239] * 
	* 
	W0729 05:05:54.941106   24015 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:05:54.949318   24015 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.86s)
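Every failure in this group has the same root cause visible in the stderr above: socket_vmnet_client gets "Connection refused" when it dials the Unix socket at /var/run/socket_vmnet. On a Unix socket, a refusal means the socket file exists but no process is accepting on it, so the socket_vmnet daemon is down on this agent even though the client binary at /opt/socket_vmnet/bin/socket_vmnet_client is present and runs. A minimal pre-flight probe that reproduces the check (a hypothetical standalone sketch, not part of the minikube code paths logged above; the path and timeout are assumptions drawn from these logs):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the logs above

	// Stat first: distinguishes "socket file missing" from "daemon not accepting".
	if _, err := os.Stat(sock); err != nil {
		fmt.Fprintf(os.Stderr, "socket file problem: %v\n", err)
		os.Exit(1)
	}

	// Dial with a short timeout; a refusal here matches the STDERR lines
	// emitted by socket_vmnet_client in the runs above.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial failed (daemon down?): %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Run against this agent, the probe would fail at the dial step, which points at restarting the socket_vmnet service rather than at QEMU or the minikube build under test.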

TestNetworkPlugins/group/enable-default-cni/Start (9.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.789739167s)

-- stdout --
	* [enable-default-cni-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-394000" primary control-plane node in "enable-default-cni-394000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-394000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:05:57.343034   24133 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:05:57.343181   24133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:05:57.343184   24133 out.go:304] Setting ErrFile to fd 2...
	I0729 05:05:57.343186   24133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:05:57.343320   24133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:05:57.344456   24133 out.go:298] Setting JSON to false
	I0729 05:05:57.360563   24133 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11126,"bootTime":1722243631,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:05:57.360623   24133 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:05:57.365548   24133 out.go:177] * [enable-default-cni-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:05:57.373498   24133 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:05:57.373575   24133 notify.go:220] Checking for updates...
	I0729 05:05:57.381490   24133 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:05:57.384462   24133 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:05:57.388516   24133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:05:57.391516   24133 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:05:57.394484   24133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:05:57.397776   24133 config.go:182] Loaded profile config "cert-expiration-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:05:57.397851   24133 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:05:57.397896   24133 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:05:57.402536   24133 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:05:57.409476   24133 start.go:297] selected driver: qemu2
	I0729 05:05:57.409482   24133 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:05:57.409487   24133 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:05:57.411850   24133 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:05:57.416480   24133 out.go:177] * Automatically selected the socket_vmnet network
	E0729 05:05:57.419506   24133 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0729 05:05:57.419519   24133 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:05:57.419541   24133 cni.go:84] Creating CNI manager for "bridge"
	I0729 05:05:57.419553   24133 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 05:05:57.419579   24133 start.go:340] cluster config:
	{Name:enable-default-cni-394000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:05:57.423299   24133 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:05:57.431512   24133 out.go:177] * Starting "enable-default-cni-394000" primary control-plane node in "enable-default-cni-394000" cluster
	I0729 05:05:57.435466   24133 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:05:57.435480   24133 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:05:57.435492   24133 cache.go:56] Caching tarball of preloaded images
	I0729 05:05:57.435543   24133 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:05:57.435549   24133 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:05:57.435613   24133 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/enable-default-cni-394000/config.json ...
	I0729 05:05:57.435625   24133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/enable-default-cni-394000/config.json: {Name:mk3354d1b73009f157a2baac9e16dc66b737a65d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:05:57.435850   24133 start.go:360] acquireMachinesLock for enable-default-cni-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:05:57.435890   24133 start.go:364] duration metric: took 29.916µs to acquireMachinesLock for "enable-default-cni-394000"
	I0729 05:05:57.435902   24133 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:05:57.435929   24133 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:05:57.442499   24133 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:05:57.460678   24133 start.go:159] libmachine.API.Create for "enable-default-cni-394000" (driver="qemu2")
	I0729 05:05:57.460703   24133 client.go:168] LocalClient.Create starting
	I0729 05:05:57.460762   24133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:05:57.460804   24133 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:57.460823   24133 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:57.460863   24133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:05:57.460886   24133 main.go:141] libmachine: Decoding PEM data...
	I0729 05:05:57.460894   24133 main.go:141] libmachine: Parsing certificate...
	I0729 05:05:57.461331   24133 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:05:57.611636   24133 main.go:141] libmachine: Creating SSH key...
	I0729 05:05:57.684404   24133 main.go:141] libmachine: Creating Disk image...
	I0729 05:05:57.684409   24133 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:05:57.684634   24133 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/disk.qcow2
	I0729 05:05:57.693801   24133 main.go:141] libmachine: STDOUT: 
	I0729 05:05:57.693819   24133 main.go:141] libmachine: STDERR: 
	I0729 05:05:57.693860   24133 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/disk.qcow2 +20000M
	I0729 05:05:57.701642   24133 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:05:57.701666   24133 main.go:141] libmachine: STDERR: 
	I0729 05:05:57.701682   24133 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/disk.qcow2
	I0729 05:05:57.701689   24133 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:05:57.701698   24133 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:05:57.701724   24133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:63:a1:91:b1:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/disk.qcow2
	I0729 05:05:57.703389   24133 main.go:141] libmachine: STDOUT: 
	I0729 05:05:57.703408   24133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:05:57.703435   24133 client.go:171] duration metric: took 242.7325ms to LocalClient.Create
	I0729 05:05:59.705576   24133 start.go:128] duration metric: took 2.269668584s to createHost
	I0729 05:05:59.705643   24133 start.go:83] releasing machines lock for "enable-default-cni-394000", held for 2.269784542s
	W0729 05:05:59.705753   24133 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:59.720830   24133 out.go:177] * Deleting "enable-default-cni-394000" in qemu2 ...
	W0729 05:05:59.750118   24133 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:05:59.750155   24133 start.go:729] Will try again in 5 seconds ...
	I0729 05:06:04.752275   24133 start.go:360] acquireMachinesLock for enable-default-cni-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:06:04.752727   24133 start.go:364] duration metric: took 367.625µs to acquireMachinesLock for "enable-default-cni-394000"
	I0729 05:06:04.752887   24133 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:06:04.753180   24133 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:06:04.757917   24133 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:06:04.804919   24133 start.go:159] libmachine.API.Create for "enable-default-cni-394000" (driver="qemu2")
	I0729 05:06:04.804971   24133 client.go:168] LocalClient.Create starting
	I0729 05:06:04.805088   24133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:06:04.805158   24133 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:04.805174   24133 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:04.805244   24133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:06:04.805297   24133 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:04.805312   24133 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:04.805779   24133 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:06:04.975990   24133 main.go:141] libmachine: Creating SSH key...
	I0729 05:06:05.044283   24133 main.go:141] libmachine: Creating Disk image...
	I0729 05:06:05.044289   24133 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:06:05.044528   24133 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/disk.qcow2
	I0729 05:06:05.053894   24133 main.go:141] libmachine: STDOUT: 
	I0729 05:06:05.053917   24133 main.go:141] libmachine: STDERR: 
	I0729 05:06:05.053966   24133 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/disk.qcow2 +20000M
	I0729 05:06:05.061886   24133 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:06:05.061899   24133 main.go:141] libmachine: STDERR: 
	I0729 05:06:05.061912   24133 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/disk.qcow2
	I0729 05:06:05.061916   24133 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:06:05.061926   24133 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:06:05.061953   24133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:fb:6d:17:c8:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/enable-default-cni-394000/disk.qcow2
	I0729 05:06:05.063651   24133 main.go:141] libmachine: STDOUT: 
	I0729 05:06:05.063667   24133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:06:05.063680   24133 client.go:171] duration metric: took 258.707ms to LocalClient.Create
	I0729 05:06:07.065826   24133 start.go:128] duration metric: took 2.31265375s to createHost
	I0729 05:06:07.065876   24133 start.go:83] releasing machines lock for "enable-default-cni-394000", held for 2.313167625s
	W0729 05:06:07.066254   24133 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:07.074874   24133 out.go:177] 
	W0729 05:06:07.079986   24133 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:06:07.080128   24133 out.go:239] * 
	* 
	W0729 05:06:07.082870   24133 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:06:07.091075   24133 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.79s)
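The recovery pattern is identical in every failed start in this group: the first createHost fails, minikube deletes the half-created profile, logs "Will try again in 5 seconds ...", retries exactly once, and only after the second refusal exits with GUEST_PROVISION (the exit status 80 that net_test.go:114 then reports as a failed start). A rough sketch of that start/delete/retry-once shape (illustrative only; createHost, deleteHost, and the fixed 5s delay are inferred from the log lines above, not minikube's actual start.go API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the real provisioning step; here it always fails
// the way the runs above do.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

// deleteHost stands in for the "* Deleting ... in qemu2 ..." cleanup.
func deleteHost(profile string) {
	fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
}

func startWithRetry(profile string, delay time.Duration) error {
	err := createHost(profile)
	if err == nil {
		return nil
	}
	deleteHost(profile)
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(delay) // "Will try again in 5 seconds ..."
	if err := createHost(profile); err != nil {
		return fmt.Errorf("GUEST_PROVISION: %w", err) // second failure is fatal
	}
	return nil
}

func main() {
	if err := startWithRetry("bridge-394000", 5*time.Second); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}

Since both attempts dial the same dead socket, the retry only adds roughly seven seconds (the 5s wait plus a second ~2.3s createHost) to each test before the inevitable exit status 80, which matches the ~9.8s durations recorded for these starts.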

TestNetworkPlugins/group/bridge/Start (9.89s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.884602875s)

-- stdout --
	* [bridge-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-394000" primary control-plane node in "bridge-394000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-394000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:06:09.308160   24242 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:06:09.308273   24242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:06:09.308276   24242 out.go:304] Setting ErrFile to fd 2...
	I0729 05:06:09.308283   24242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:06:09.308396   24242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:06:09.309465   24242 out.go:298] Setting JSON to false
	I0729 05:06:09.325558   24242 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11138,"bootTime":1722243631,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:06:09.325624   24242 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:06:09.331386   24242 out.go:177] * [bridge-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:06:09.339270   24242 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:06:09.339316   24242 notify.go:220] Checking for updates...
	I0729 05:06:09.348252   24242 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:06:09.352257   24242 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:06:09.355226   24242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:06:09.358222   24242 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:06:09.361228   24242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:06:09.363026   24242 config.go:182] Loaded profile config "cert-expiration-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:06:09.363092   24242 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:06:09.363137   24242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:06:09.367256   24242 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:06:09.374074   24242 start.go:297] selected driver: qemu2
	I0729 05:06:09.374083   24242 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:06:09.374092   24242 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:06:09.376549   24242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:06:09.380251   24242 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:06:09.384309   24242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:06:09.384326   24242 cni.go:84] Creating CNI manager for "bridge"
	I0729 05:06:09.384329   24242 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 05:06:09.384356   24242 start.go:340] cluster config:
	{Name:bridge-394000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:06:09.388138   24242 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:06:09.396224   24242 out.go:177] * Starting "bridge-394000" primary control-plane node in "bridge-394000" cluster
	I0729 05:06:09.400267   24242 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:06:09.400285   24242 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:06:09.400293   24242 cache.go:56] Caching tarball of preloaded images
	I0729 05:06:09.400346   24242 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:06:09.400352   24242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:06:09.400416   24242 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/bridge-394000/config.json ...
	I0729 05:06:09.400427   24242 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/bridge-394000/config.json: {Name:mk11f89ac82399d2ebb8e307ccc906285f920c30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:06:09.400831   24242 start.go:360] acquireMachinesLock for bridge-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:06:09.400866   24242 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "bridge-394000"
	I0729 05:06:09.400878   24242 start.go:93] Provisioning new machine with config: &{Name:bridge-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:06:09.400910   24242 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:06:09.404280   24242 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:06:09.421947   24242 start.go:159] libmachine.API.Create for "bridge-394000" (driver="qemu2")
	I0729 05:06:09.421972   24242 client.go:168] LocalClient.Create starting
	I0729 05:06:09.422034   24242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:06:09.422064   24242 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:09.422074   24242 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:09.422113   24242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:06:09.422141   24242 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:09.422151   24242 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:09.422526   24242 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:06:09.574535   24242 main.go:141] libmachine: Creating SSH key...
	I0729 05:06:09.618652   24242 main.go:141] libmachine: Creating Disk image...
	I0729 05:06:09.618657   24242 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:06:09.618860   24242 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/disk.qcow2
	I0729 05:06:09.628208   24242 main.go:141] libmachine: STDOUT: 
	I0729 05:06:09.628229   24242 main.go:141] libmachine: STDERR: 
	I0729 05:06:09.628276   24242 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/disk.qcow2 +20000M
	I0729 05:06:09.636120   24242 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:06:09.636138   24242 main.go:141] libmachine: STDERR: 
	I0729 05:06:09.636161   24242 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/disk.qcow2
	I0729 05:06:09.636165   24242 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:06:09.636178   24242 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:06:09.636206   24242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:b5:31:c0:ce:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/disk.qcow2
	I0729 05:06:09.637850   24242 main.go:141] libmachine: STDOUT: 
	I0729 05:06:09.637865   24242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:06:09.637882   24242 client.go:171] duration metric: took 215.909875ms to LocalClient.Create
	I0729 05:06:11.640022   24242 start.go:128] duration metric: took 2.239131834s to createHost
	I0729 05:06:11.640120   24242 start.go:83] releasing machines lock for "bridge-394000", held for 2.239284959s
	W0729 05:06:11.640185   24242 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:11.650212   24242 out.go:177] * Deleting "bridge-394000" in qemu2 ...
	W0729 05:06:11.683535   24242 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:11.683563   24242 start.go:729] Will try again in 5 seconds ...
	I0729 05:06:16.685245   24242 start.go:360] acquireMachinesLock for bridge-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:06:16.685768   24242 start.go:364] duration metric: took 419.041µs to acquireMachinesLock for "bridge-394000"
	I0729 05:06:16.685909   24242 start.go:93] Provisioning new machine with config: &{Name:bridge-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:06:16.686248   24242 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:06:16.700938   24242 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:06:16.753473   24242 start.go:159] libmachine.API.Create for "bridge-394000" (driver="qemu2")
	I0729 05:06:16.753519   24242 client.go:168] LocalClient.Create starting
	I0729 05:06:16.753668   24242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:06:16.753739   24242 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:16.753754   24242 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:16.753814   24242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:06:16.753859   24242 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:16.753873   24242 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:16.754535   24242 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:06:16.913851   24242 main.go:141] libmachine: Creating SSH key...
	I0729 05:06:17.100377   24242 main.go:141] libmachine: Creating Disk image...
	I0729 05:06:17.100383   24242 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:06:17.100605   24242 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/disk.qcow2
	I0729 05:06:17.110226   24242 main.go:141] libmachine: STDOUT: 
	I0729 05:06:17.110246   24242 main.go:141] libmachine: STDERR: 
	I0729 05:06:17.110305   24242 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/disk.qcow2 +20000M
	I0729 05:06:17.118287   24242 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:06:17.118305   24242 main.go:141] libmachine: STDERR: 
	I0729 05:06:17.118321   24242 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/disk.qcow2
	I0729 05:06:17.118326   24242 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:06:17.118336   24242 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:06:17.118364   24242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:bf:07:f5:21:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/bridge-394000/disk.qcow2
	I0729 05:06:17.119994   24242 main.go:141] libmachine: STDOUT: 
	I0729 05:06:17.120014   24242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:06:17.120028   24242 client.go:171] duration metric: took 366.50925ms to LocalClient.Create
	I0729 05:06:19.122153   24242 start.go:128] duration metric: took 2.435922917s to createHost
	I0729 05:06:19.122211   24242 start.go:83] releasing machines lock for "bridge-394000", held for 2.436465459s
	W0729 05:06:19.122590   24242 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:19.136414   24242 out.go:177] 
	W0729 05:06:19.140511   24242 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:06:19.140559   24242 out.go:239] * 
	W0729 05:06:19.143162   24242 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:06:19.150349   24242 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.89s)
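
Every failure in this group reduces to the same root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and host creation aborts. Before rerunning the suite, the socket can be probed directly on the build host. The commands below are a sketch for a Homebrew-managed socket_vmnet install; the paths are taken from the log above, and the Homebrew service name is an assumption:

	# Does the unix socket exist, and is a daemon accepting connections on it?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null    # "Connection refused" here reproduces the failure
	# Restart the daemon (assumes socket_vmnet was installed and registered via Homebrew)
	sudo brew services restart socket_vmnet

If nc also reports "Connection refused", the daemon is down, and restarting it should clear the remaining failures in this group.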

TestNetworkPlugins/group/kubenet/Start (9.91s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.907541792s)

-- stdout --
	* [kubenet-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-394000" primary control-plane node in "kubenet-394000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-394000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:06:21.306752   24351 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:06:21.306881   24351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:06:21.306885   24351 out.go:304] Setting ErrFile to fd 2...
	I0729 05:06:21.306887   24351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:06:21.307015   24351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:06:21.308036   24351 out.go:298] Setting JSON to false
	I0729 05:06:21.324229   24351 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11150,"bootTime":1722243631,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:06:21.324297   24351 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:06:21.330967   24351 out.go:177] * [kubenet-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:06:21.337904   24351 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:06:21.337997   24351 notify.go:220] Checking for updates...
	I0729 05:06:21.345848   24351 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:06:21.349878   24351 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:06:21.353724   24351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:06:21.356884   24351 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:06:21.359884   24351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:06:21.363216   24351 config.go:182] Loaded profile config "cert-expiration-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:06:21.363297   24351 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:06:21.363352   24351 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:06:21.367874   24351 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:06:21.374898   24351 start.go:297] selected driver: qemu2
	I0729 05:06:21.374906   24351 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:06:21.374915   24351 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:06:21.377273   24351 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:06:21.379842   24351 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:06:21.388055   24351 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:06:21.388082   24351 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0729 05:06:21.388123   24351 start.go:340] cluster config:
	{Name:kubenet-394000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:06:21.392029   24351 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:06:21.399874   24351 out.go:177] * Starting "kubenet-394000" primary control-plane node in "kubenet-394000" cluster
	I0729 05:06:21.403896   24351 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:06:21.403911   24351 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:06:21.403920   24351 cache.go:56] Caching tarball of preloaded images
	I0729 05:06:21.403975   24351 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:06:21.403980   24351 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:06:21.404041   24351 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/kubenet-394000/config.json ...
	I0729 05:06:21.404052   24351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/kubenet-394000/config.json: {Name:mkcb854c44d038ce1c2881c23cdfd9dc9030021b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:06:21.404447   24351 start.go:360] acquireMachinesLock for kubenet-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:06:21.404480   24351 start.go:364] duration metric: took 27.584µs to acquireMachinesLock for "kubenet-394000"
	I0729 05:06:21.404491   24351 start.go:93] Provisioning new machine with config: &{Name:kubenet-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:06:21.404549   24351 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:06:21.408883   24351 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:06:21.426243   24351 start.go:159] libmachine.API.Create for "kubenet-394000" (driver="qemu2")
	I0729 05:06:21.426268   24351 client.go:168] LocalClient.Create starting
	I0729 05:06:21.426322   24351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:06:21.426350   24351 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:21.426358   24351 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:21.426395   24351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:06:21.426417   24351 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:21.426426   24351 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:21.426916   24351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:06:21.576116   24351 main.go:141] libmachine: Creating SSH key...
	I0729 05:06:21.635921   24351 main.go:141] libmachine: Creating Disk image...
	I0729 05:06:21.635926   24351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:06:21.636128   24351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/disk.qcow2
	I0729 05:06:21.645355   24351 main.go:141] libmachine: STDOUT: 
	I0729 05:06:21.645371   24351 main.go:141] libmachine: STDERR: 
	I0729 05:06:21.645418   24351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/disk.qcow2 +20000M
	I0729 05:06:21.653229   24351 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:06:21.653242   24351 main.go:141] libmachine: STDERR: 
	I0729 05:06:21.653254   24351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/disk.qcow2
	I0729 05:06:21.653258   24351 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:06:21.653271   24351 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:06:21.653296   24351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:6b:af:a6:b6:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/disk.qcow2
	I0729 05:06:21.654916   24351 main.go:141] libmachine: STDOUT: 
	I0729 05:06:21.654935   24351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:06:21.654955   24351 client.go:171] duration metric: took 228.686333ms to LocalClient.Create
	I0729 05:06:23.657094   24351 start.go:128] duration metric: took 2.252565708s to createHost
	I0729 05:06:23.657151   24351 start.go:83] releasing machines lock for "kubenet-394000", held for 2.252703125s
	W0729 05:06:23.657207   24351 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:23.674238   24351 out.go:177] * Deleting "kubenet-394000" in qemu2 ...
	W0729 05:06:23.700358   24351 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:23.700389   24351 start.go:729] Will try again in 5 seconds ...
	I0729 05:06:28.702502   24351 start.go:360] acquireMachinesLock for kubenet-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:06:28.702965   24351 start.go:364] duration metric: took 355.833µs to acquireMachinesLock for "kubenet-394000"
	I0729 05:06:28.703093   24351 start.go:93] Provisioning new machine with config: &{Name:kubenet-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:06:28.703398   24351 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:06:28.714041   24351 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:06:28.762768   24351 start.go:159] libmachine.API.Create for "kubenet-394000" (driver="qemu2")
	I0729 05:06:28.762820   24351 client.go:168] LocalClient.Create starting
	I0729 05:06:28.762934   24351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:06:28.762999   24351 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:28.763014   24351 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:28.763081   24351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:06:28.763125   24351 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:28.763142   24351 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:28.763810   24351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:06:28.925093   24351 main.go:141] libmachine: Creating SSH key...
	I0729 05:06:29.125202   24351 main.go:141] libmachine: Creating Disk image...
	I0729 05:06:29.125211   24351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:06:29.125461   24351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/disk.qcow2
	I0729 05:06:29.135087   24351 main.go:141] libmachine: STDOUT: 
	I0729 05:06:29.135110   24351 main.go:141] libmachine: STDERR: 
	I0729 05:06:29.135164   24351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/disk.qcow2 +20000M
	I0729 05:06:29.143096   24351 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:06:29.143114   24351 main.go:141] libmachine: STDERR: 
	I0729 05:06:29.143123   24351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/disk.qcow2
	I0729 05:06:29.143130   24351 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:06:29.143139   24351 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:06:29.143166   24351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:6c:24:a4:dc:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/kubenet-394000/disk.qcow2
	I0729 05:06:29.144836   24351 main.go:141] libmachine: STDOUT: 
	I0729 05:06:29.144851   24351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:06:29.144864   24351 client.go:171] duration metric: took 382.045084ms to LocalClient.Create
	I0729 05:06:31.147009   24351 start.go:128] duration metric: took 2.4436145s to createHost
	I0729 05:06:31.147061   24351 start.go:83] releasing machines lock for "kubenet-394000", held for 2.444116833s
	W0729 05:06:31.147598   24351 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:31.159169   24351 out.go:177] 
	W0729 05:06:31.164023   24351 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:06:31.164044   24351 out.go:239] * 
	W0729 05:06:31.166023   24351 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:06:31.173909   24351 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.91s)
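
Because minikube only shells out through socket_vmnet_client, the failure reproduces without minikube at all. A minimal sketch, using the client and socket paths recorded in the log; here true merely stands in for the full qemu-system-aarch64 command, since the client dials the socket before exec'ing whatever it wraps:

	# socket_vmnet_client connects to /var/run/socket_vmnet and passes the connected
	# socket to the wrapped command as fd 3 (hence "-netdev socket,id=net0,fd=3" in
	# the QEMU command lines above). With no daemon listening it exits immediately
	# with 'Failed to connect to "/var/run/socket_vmnet": Connection refused'.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true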

TestNetworkPlugins/group/custom-flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.813014958s)

-- stdout --
	* [custom-flannel-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-394000" primary control-plane node in "custom-flannel-394000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-394000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:06:33.311102   24465 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:06:33.311237   24465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:06:33.311240   24465 out.go:304] Setting ErrFile to fd 2...
	I0729 05:06:33.311248   24465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:06:33.311390   24465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:06:33.312494   24465 out.go:298] Setting JSON to false
	I0729 05:06:33.328716   24465 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11162,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:06:33.328782   24465 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:06:33.335945   24465 out.go:177] * [custom-flannel-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:06:33.342797   24465 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:06:33.342862   24465 notify.go:220] Checking for updates...
	I0729 05:06:33.348788   24465 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:06:33.352808   24465 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:06:33.355714   24465 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:06:33.362788   24465 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:06:33.364332   24465 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:06:33.368020   24465 config.go:182] Loaded profile config "cert-expiration-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:06:33.368090   24465 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:06:33.368141   24465 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:06:33.372793   24465 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:06:33.377787   24465 start.go:297] selected driver: qemu2
	I0729 05:06:33.377795   24465 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:06:33.377802   24465 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:06:33.380286   24465 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:06:33.384714   24465 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:06:33.386051   24465 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:06:33.386080   24465 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0729 05:06:33.386088   24465 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0729 05:06:33.386120   24465 start.go:340] cluster config:
	{Name:custom-flannel-394000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:06:33.389920   24465 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:06:33.397842   24465 out.go:177] * Starting "custom-flannel-394000" primary control-plane node in "custom-flannel-394000" cluster
	I0729 05:06:33.401746   24465 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:06:33.401763   24465 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:06:33.401773   24465 cache.go:56] Caching tarball of preloaded images
	I0729 05:06:33.401832   24465 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:06:33.401838   24465 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:06:33.401910   24465 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/custom-flannel-394000/config.json ...
	I0729 05:06:33.401922   24465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/custom-flannel-394000/config.json: {Name:mkc9771a9d74954d5ac02b00b945c8fbe677f678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:06:33.402312   24465 start.go:360] acquireMachinesLock for custom-flannel-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:06:33.402349   24465 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "custom-flannel-394000"
	I0729 05:06:33.402361   24465 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:06:33.402391   24465 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:06:33.409801   24465 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:06:33.428214   24465 start.go:159] libmachine.API.Create for "custom-flannel-394000" (driver="qemu2")
	I0729 05:06:33.428238   24465 client.go:168] LocalClient.Create starting
	I0729 05:06:33.428295   24465 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:06:33.428350   24465 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:33.428364   24465 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:33.428392   24465 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:06:33.428415   24465 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:33.428423   24465 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:33.428804   24465 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:06:33.582761   24465 main.go:141] libmachine: Creating SSH key...
	I0729 05:06:33.669301   24465 main.go:141] libmachine: Creating Disk image...
	I0729 05:06:33.669306   24465 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:06:33.669511   24465 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/disk.qcow2
	I0729 05:06:33.678774   24465 main.go:141] libmachine: STDOUT: 
	I0729 05:06:33.678789   24465 main.go:141] libmachine: STDERR: 
	I0729 05:06:33.678870   24465 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/disk.qcow2 +20000M
	I0729 05:06:33.686692   24465 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:06:33.686706   24465 main.go:141] libmachine: STDERR: 
	I0729 05:06:33.686720   24465 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/disk.qcow2
	I0729 05:06:33.686725   24465 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:06:33.686736   24465 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:06:33.686762   24465 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:c6:68:6a:f7:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/disk.qcow2
	I0729 05:06:33.688358   24465 main.go:141] libmachine: STDOUT: 
	I0729 05:06:33.688372   24465 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:06:33.688390   24465 client.go:171] duration metric: took 260.1515ms to LocalClient.Create
	I0729 05:06:35.690535   24465 start.go:128] duration metric: took 2.288165042s to createHost
	I0729 05:06:35.690664   24465 start.go:83] releasing machines lock for "custom-flannel-394000", held for 2.288283042s
	W0729 05:06:35.690736   24465 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:35.701830   24465 out.go:177] * Deleting "custom-flannel-394000" in qemu2 ...
	W0729 05:06:35.733318   24465 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:35.733346   24465 start.go:729] Will try again in 5 seconds ...
	I0729 05:06:40.735529   24465 start.go:360] acquireMachinesLock for custom-flannel-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:06:40.735980   24465 start.go:364] duration metric: took 355.208µs to acquireMachinesLock for "custom-flannel-394000"
	I0729 05:06:40.736154   24465 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:06:40.736513   24465 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:06:40.752253   24465 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:06:40.802209   24465 start.go:159] libmachine.API.Create for "custom-flannel-394000" (driver="qemu2")
	I0729 05:06:40.802255   24465 client.go:168] LocalClient.Create starting
	I0729 05:06:40.802372   24465 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:06:40.802435   24465 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:40.802453   24465 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:40.802522   24465 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:06:40.802568   24465 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:40.802583   24465 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:40.803339   24465 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:06:40.962619   24465 main.go:141] libmachine: Creating SSH key...
	I0729 05:06:41.031445   24465 main.go:141] libmachine: Creating Disk image...
	I0729 05:06:41.031451   24465 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:06:41.031671   24465 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/disk.qcow2
	I0729 05:06:41.040755   24465 main.go:141] libmachine: STDOUT: 
	I0729 05:06:41.040783   24465 main.go:141] libmachine: STDERR: 
	I0729 05:06:41.040837   24465 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/disk.qcow2 +20000M
	I0729 05:06:41.048625   24465 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:06:41.048640   24465 main.go:141] libmachine: STDERR: 
	I0729 05:06:41.048653   24465 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/disk.qcow2
	I0729 05:06:41.048656   24465 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:06:41.048666   24465 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:06:41.048703   24465 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:95:5a:b0:01:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/custom-flannel-394000/disk.qcow2
	I0729 05:06:41.050343   24465 main.go:141] libmachine: STDOUT: 
	I0729 05:06:41.050359   24465 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:06:41.050371   24465 client.go:171] duration metric: took 248.115334ms to LocalClient.Create
	I0729 05:06:43.052513   24465 start.go:128] duration metric: took 2.316016334s to createHost
	I0729 05:06:43.052567   24465 start.go:83] releasing machines lock for "custom-flannel-394000", held for 2.316601125s
	W0729 05:06:43.052888   24465 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:43.066560   24465 out.go:177] 
	W0729 05:06:43.069546   24465 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:06:43.069597   24465 out.go:239] * 
	* 
	W0729 05:06:43.072179   24465 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:06:43.081600   24465 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.82s)
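
Every failure in this group has the same proximate cause: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which first connects to the daemon's unix socket at /var/run/socket_vmnet and passes the connected descriptor to QEMU as "-netdev socket,id=net0,fd=3" (both visible verbatim in the logged command line). With no daemon listening on that socket, the wrapper exits with "Connection refused" before QEMU ever starts. The following is a minimal diagnostic sketch, not minikube code; only the socket path is taken from the logs above:

	// socketprobe.go: report whether the socket_vmnet daemon is reachable.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path copied verbatim from the failing invocations in this report.
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A "connection refused" here reproduces the STDERR seen in each
			// failure: the socket file may exist, but no daemon is behind it.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the socket_vmnet daemon on the build host (however it is managed there, e.g. via launchd) is the likely fix; the qemu2 driver itself appears to behave correctly up to that point.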

TestNetworkPlugins/group/calico/Start (9.77s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.769621958s)

-- stdout --
	* [calico-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-394000" primary control-plane node in "calico-394000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-394000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:06:45.419904   24582 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:06:45.420040   24582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:06:45.420044   24582 out.go:304] Setting ErrFile to fd 2...
	I0729 05:06:45.420046   24582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:06:45.420174   24582 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:06:45.421277   24582 out.go:298] Setting JSON to false
	I0729 05:06:45.437630   24582 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11174,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:06:45.437689   24582 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:06:45.444863   24582 out.go:177] * [calico-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:06:45.451687   24582 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:06:45.451721   24582 notify.go:220] Checking for updates...
	I0729 05:06:45.459836   24582 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:06:45.463819   24582 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:06:45.467814   24582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:06:45.470900   24582 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:06:45.473803   24582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:06:45.477140   24582 config.go:182] Loaded profile config "cert-expiration-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:06:45.477227   24582 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:06:45.477276   24582 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:06:45.481852   24582 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:06:45.488807   24582 start.go:297] selected driver: qemu2
	I0729 05:06:45.488817   24582 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:06:45.488824   24582 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:06:45.491158   24582 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:06:45.495831   24582 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:06:45.498917   24582 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:06:45.498939   24582 cni.go:84] Creating CNI manager for "calico"
	I0729 05:06:45.498951   24582 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0729 05:06:45.498982   24582 start.go:340] cluster config:
	{Name:calico-394000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:06:45.502687   24582 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:06:45.511711   24582 out.go:177] * Starting "calico-394000" primary control-plane node in "calico-394000" cluster
	I0729 05:06:45.515817   24582 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:06:45.515832   24582 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:06:45.515841   24582 cache.go:56] Caching tarball of preloaded images
	I0729 05:06:45.515899   24582 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:06:45.515905   24582 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:06:45.515967   24582 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/calico-394000/config.json ...
	I0729 05:06:45.515980   24582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/calico-394000/config.json: {Name:mkcb4bbdea67db5cfe7c6c3a93873665e60e9f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:06:45.516206   24582 start.go:360] acquireMachinesLock for calico-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:06:45.516243   24582 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "calico-394000"
	I0729 05:06:45.516255   24582 start.go:93] Provisioning new machine with config: &{Name:calico-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:06:45.516282   24582 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:06:45.522828   24582 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:06:45.541172   24582 start.go:159] libmachine.API.Create for "calico-394000" (driver="qemu2")
	I0729 05:06:45.541200   24582 client.go:168] LocalClient.Create starting
	I0729 05:06:45.541269   24582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:06:45.541300   24582 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:45.541317   24582 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:45.541354   24582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:06:45.541378   24582 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:45.541387   24582 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:45.541754   24582 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:06:45.692011   24582 main.go:141] libmachine: Creating SSH key...
	I0729 05:06:45.733745   24582 main.go:141] libmachine: Creating Disk image...
	I0729 05:06:45.733751   24582 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:06:45.733965   24582 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/disk.qcow2
	I0729 05:06:45.743102   24582 main.go:141] libmachine: STDOUT: 
	I0729 05:06:45.743120   24582 main.go:141] libmachine: STDERR: 
	I0729 05:06:45.743167   24582 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/disk.qcow2 +20000M
	I0729 05:06:45.750993   24582 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:06:45.751008   24582 main.go:141] libmachine: STDERR: 
	I0729 05:06:45.751022   24582 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/disk.qcow2
	I0729 05:06:45.751027   24582 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:06:45.751040   24582 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:06:45.751067   24582 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:51:ed:49:4a:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/disk.qcow2
	I0729 05:06:45.752652   24582 main.go:141] libmachine: STDOUT: 
	I0729 05:06:45.752667   24582 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:06:45.752687   24582 client.go:171] duration metric: took 211.486292ms to LocalClient.Create
	I0729 05:06:47.754832   24582 start.go:128] duration metric: took 2.238569125s to createHost
	I0729 05:06:47.754908   24582 start.go:83] releasing machines lock for "calico-394000", held for 2.23869675s
	W0729 05:06:47.754961   24582 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:47.766265   24582 out.go:177] * Deleting "calico-394000" in qemu2 ...
	W0729 05:06:47.796116   24582 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:47.796134   24582 start.go:729] Will try again in 5 seconds ...
	I0729 05:06:52.798217   24582 start.go:360] acquireMachinesLock for calico-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:06:52.798753   24582 start.go:364] duration metric: took 421.834µs to acquireMachinesLock for "calico-394000"
	I0729 05:06:52.798863   24582 start.go:93] Provisioning new machine with config: &{Name:calico-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:06:52.799185   24582 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:06:52.804836   24582 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:06:52.854258   24582 start.go:159] libmachine.API.Create for "calico-394000" (driver="qemu2")
	I0729 05:06:52.854308   24582 client.go:168] LocalClient.Create starting
	I0729 05:06:52.854418   24582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:06:52.854476   24582 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:52.854494   24582 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:52.854552   24582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:06:52.854601   24582 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:52.854619   24582 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:52.855114   24582 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:06:53.017281   24582 main.go:141] libmachine: Creating SSH key...
	I0729 05:06:53.096909   24582 main.go:141] libmachine: Creating Disk image...
	I0729 05:06:53.096914   24582 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:06:53.097123   24582 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/disk.qcow2
	I0729 05:06:53.106508   24582 main.go:141] libmachine: STDOUT: 
	I0729 05:06:53.106536   24582 main.go:141] libmachine: STDERR: 
	I0729 05:06:53.106596   24582 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/disk.qcow2 +20000M
	I0729 05:06:53.114342   24582 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:06:53.114365   24582 main.go:141] libmachine: STDERR: 
	I0729 05:06:53.114376   24582 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/disk.qcow2
	I0729 05:06:53.114381   24582 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:06:53.114390   24582 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:06:53.114416   24582 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:5c:7f:8a:7a:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/calico-394000/disk.qcow2
	I0729 05:06:53.116032   24582 main.go:141] libmachine: STDOUT: 
	I0729 05:06:53.116049   24582 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:06:53.116062   24582 client.go:171] duration metric: took 261.752667ms to LocalClient.Create
	I0729 05:06:55.118228   24582 start.go:128] duration metric: took 2.319027791s to createHost
	I0729 05:06:55.118285   24582 start.go:83] releasing machines lock for "calico-394000", held for 2.319547125s
	W0729 05:06:55.118608   24582 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:55.133265   24582 out.go:177] 
	W0729 05:06:55.137272   24582 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:06:55.137295   24582 out.go:239] * 
	* 
	W0729 05:06:55.139673   24582 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:06:55.148214   24582 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.77s)
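
Note that disk preparation succeeds on every attempt; only the network wrapper fails. Before each start, libmachine runs two qemu-img steps, a raw-to-qcow2 convert followed by a +20000M grow, exactly as logged above. A sketch of the same sequence driven from Go, with placeholder file names standing in for the per-profile paths in the logs:

	// diskprep.go: the convert + resize pair the logs show before each VM start.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		// Placeholders for .minikube/machines/<profile>/disk.qcow2.raw and disk.qcow2
		raw, qcow := "disk.qcow2.raw", "disk.qcow2"

		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow)
		run("qemu-img", "resize", qcow, "+20000M")
	}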

TestNetworkPlugins/group/false/Start (9.81s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-394000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.81078975s)

-- stdout --
	* [false-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-394000" primary control-plane node in "false-394000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-394000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:06:57.570410   24699 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:06:57.570544   24699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:06:57.570548   24699 out.go:304] Setting ErrFile to fd 2...
	I0729 05:06:57.570550   24699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:06:57.570680   24699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:06:57.571800   24699 out.go:298] Setting JSON to false
	I0729 05:06:57.587982   24699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11186,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:06:57.588042   24699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:06:57.593735   24699 out.go:177] * [false-394000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:06:57.601700   24699 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:06:57.601758   24699 notify.go:220] Checking for updates...
	I0729 05:06:57.610624   24699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:06:57.613548   24699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:06:57.616620   24699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:06:57.619670   24699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:06:57.621072   24699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:06:57.624890   24699 config.go:182] Loaded profile config "cert-expiration-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:06:57.624972   24699 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:06:57.625023   24699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:06:57.628665   24699 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:06:57.633631   24699 start.go:297] selected driver: qemu2
	I0729 05:06:57.633637   24699 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:06:57.633650   24699 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:06:57.635884   24699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:06:57.638681   24699 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:06:57.641739   24699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:06:57.641754   24699 cni.go:84] Creating CNI manager for "false"
	I0729 05:06:57.641788   24699 start.go:340] cluster config:
	{Name:false-394000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:06:57.645466   24699 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:06:57.653659   24699 out.go:177] * Starting "false-394000" primary control-plane node in "false-394000" cluster
	I0729 05:06:57.657580   24699 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:06:57.657598   24699 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:06:57.657609   24699 cache.go:56] Caching tarball of preloaded images
	I0729 05:06:57.657686   24699 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:06:57.657692   24699 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:06:57.657752   24699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/false-394000/config.json ...
	I0729 05:06:57.657764   24699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/false-394000/config.json: {Name:mka43f600676d4697b3057402f0b35a75da0ee20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:06:57.657989   24699 start.go:360] acquireMachinesLock for false-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:06:57.658025   24699 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "false-394000"
	I0729 05:06:57.658038   24699 start.go:93] Provisioning new machine with config: &{Name:false-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:06:57.658073   24699 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:06:57.666633   24699 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:06:57.684785   24699 start.go:159] libmachine.API.Create for "false-394000" (driver="qemu2")
	I0729 05:06:57.684815   24699 client.go:168] LocalClient.Create starting
	I0729 05:06:57.684891   24699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:06:57.684923   24699 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:57.684932   24699 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:57.684975   24699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:06:57.685000   24699 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:57.685006   24699 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:57.685451   24699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:06:57.836258   24699 main.go:141] libmachine: Creating SSH key...
	I0729 05:06:57.888585   24699 main.go:141] libmachine: Creating Disk image...
	I0729 05:06:57.888591   24699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:06:57.888796   24699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/disk.qcow2
	I0729 05:06:57.898008   24699 main.go:141] libmachine: STDOUT: 
	I0729 05:06:57.898022   24699 main.go:141] libmachine: STDERR: 
	I0729 05:06:57.898069   24699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/disk.qcow2 +20000M
	I0729 05:06:57.905909   24699 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:06:57.905926   24699 main.go:141] libmachine: STDERR: 
	I0729 05:06:57.905942   24699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/disk.qcow2
	I0729 05:06:57.905947   24699 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:06:57.905961   24699 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:06:57.905988   24699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:65:3a:32:0a:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/disk.qcow2
	I0729 05:06:57.907659   24699 main.go:141] libmachine: STDOUT: 
	I0729 05:06:57.907673   24699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:06:57.907691   24699 client.go:171] duration metric: took 222.874291ms to LocalClient.Create
	I0729 05:06:59.909826   24699 start.go:128] duration metric: took 2.251766833s to createHost
	I0729 05:06:59.909887   24699 start.go:83] releasing machines lock for "false-394000", held for 2.251893416s
	W0729 05:06:59.909949   24699 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:59.921217   24699 out.go:177] * Deleting "false-394000" in qemu2 ...
	W0729 05:06:59.951180   24699 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:06:59.951200   24699 start.go:729] Will try again in 5 seconds ...
	I0729 05:07:04.953259   24699 start.go:360] acquireMachinesLock for false-394000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:07:04.953739   24699 start.go:364] duration metric: took 401.791µs to acquireMachinesLock for "false-394000"
	I0729 05:07:04.953854   24699 start.go:93] Provisioning new machine with config: &{Name:false-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:07:04.954170   24699 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:07:04.970836   24699 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 05:07:05.020279   24699 start.go:159] libmachine.API.Create for "false-394000" (driver="qemu2")
	I0729 05:07:05.020332   24699 client.go:168] LocalClient.Create starting
	I0729 05:07:05.020462   24699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:07:05.020541   24699 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:05.020560   24699 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:05.020625   24699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:07:05.020673   24699 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:05.020686   24699 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:05.021215   24699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:07:05.182557   24699 main.go:141] libmachine: Creating SSH key...
	I0729 05:07:05.281579   24699 main.go:141] libmachine: Creating Disk image...
	I0729 05:07:05.281585   24699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:07:05.281790   24699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/disk.qcow2
	I0729 05:07:05.291024   24699 main.go:141] libmachine: STDOUT: 
	I0729 05:07:05.291044   24699 main.go:141] libmachine: STDERR: 
	I0729 05:07:05.291106   24699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/disk.qcow2 +20000M
	I0729 05:07:05.299084   24699 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:07:05.299099   24699 main.go:141] libmachine: STDERR: 
	I0729 05:07:05.299116   24699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/disk.qcow2
	I0729 05:07:05.299120   24699 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:07:05.299128   24699 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:07:05.299154   24699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:f1:7b:9f:25:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/false-394000/disk.qcow2
	I0729 05:07:05.300801   24699 main.go:141] libmachine: STDOUT: 
	I0729 05:07:05.300817   24699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:07:05.300831   24699 client.go:171] duration metric: took 280.498417ms to LocalClient.Create
	I0729 05:07:07.302970   24699 start.go:128] duration metric: took 2.348815541s to createHost
	I0729 05:07:07.303031   24699 start.go:83] releasing machines lock for "false-394000", held for 2.349308333s
	W0729 05:07:07.303389   24699 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:07.314100   24699 out.go:177] 
	W0729 05:07:07.325246   24699 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:07:07.325275   24699 out.go:239] * 
	* 
	W0729 05:07:07.327791   24699 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:07:07.338130   24699 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.81s)
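
Every failure in this block reduces to the same line in the driver log: Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing was listening on the Unix socket that socket_vmnet_client hands to qemu-system-aarch64. A minimal Go sketch of that probe (a hypothetical standalone check, not part of the test suite; the socket path is taken from the log above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the failing log lines above.
	const sock = "/var/run/socket_vmnet"

	// socket_vmnet_client essentially starts by dialing the daemon's Unix
	// socket; if the daemon is not running, the dial fails immediately.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no daemon listening, this reproduces the "Connection refused"
		// seen on every VM create attempt in this report.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}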

TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-051000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-051000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.862301333s)

-- stdout --
	* [old-k8s-version-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-051000" primary control-plane node in "old-k8s-version-051000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-051000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:07:09.545989   24808 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:07:09.546119   24808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:09.546122   24808 out.go:304] Setting ErrFile to fd 2...
	I0729 05:07:09.546125   24808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:09.546272   24808 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:07:09.547327   24808 out.go:298] Setting JSON to false
	I0729 05:07:09.563307   24808 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11198,"bootTime":1722243631,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:07:09.563374   24808 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:07:09.568754   24808 out.go:177] * [old-k8s-version-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:07:09.576984   24808 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:07:09.577045   24808 notify.go:220] Checking for updates...
	I0729 05:07:09.584873   24808 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:07:09.587946   24808 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:07:09.591926   24808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:07:09.594911   24808 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:07:09.597948   24808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:07:09.601208   24808 config.go:182] Loaded profile config "cert-expiration-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:07:09.601282   24808 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:07:09.601332   24808 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:07:09.605864   24808 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:07:09.612869   24808 start.go:297] selected driver: qemu2
	I0729 05:07:09.612874   24808 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:07:09.612880   24808 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:07:09.615184   24808 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:07:09.617916   24808 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:07:09.622018   24808 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:07:09.622053   24808 cni.go:84] Creating CNI manager for ""
	I0729 05:07:09.622060   24808 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 05:07:09.622087   24808 start.go:340] cluster config:
	{Name:old-k8s-version-051000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:07:09.625805   24808 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:09.634920   24808 out.go:177] * Starting "old-k8s-version-051000" primary control-plane node in "old-k8s-version-051000" cluster
	I0729 05:07:09.637876   24808 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 05:07:09.637891   24808 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 05:07:09.637915   24808 cache.go:56] Caching tarball of preloaded images
	I0729 05:07:09.637979   24808 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:07:09.637985   24808 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 05:07:09.638055   24808 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/old-k8s-version-051000/config.json ...
	I0729 05:07:09.638067   24808 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/old-k8s-version-051000/config.json: {Name:mk6ff7b2af1f66a741efda47f155131222fe62b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:07:09.638295   24808 start.go:360] acquireMachinesLock for old-k8s-version-051000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:07:09.638334   24808 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "old-k8s-version-051000"
	I0729 05:07:09.638347   24808 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:07:09.638377   24808 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:07:09.642977   24808 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 05:07:09.660822   24808 start.go:159] libmachine.API.Create for "old-k8s-version-051000" (driver="qemu2")
	I0729 05:07:09.660851   24808 client.go:168] LocalClient.Create starting
	I0729 05:07:09.660916   24808 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:07:09.660947   24808 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:09.660958   24808 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:09.660997   24808 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:07:09.661021   24808 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:09.661029   24808 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:09.661410   24808 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:07:09.814018   24808 main.go:141] libmachine: Creating SSH key...
	I0729 05:07:09.883753   24808 main.go:141] libmachine: Creating Disk image...
	I0729 05:07:09.883769   24808 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:07:09.884002   24808 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2
	I0729 05:07:09.893315   24808 main.go:141] libmachine: STDOUT: 
	I0729 05:07:09.893330   24808 main.go:141] libmachine: STDERR: 
	I0729 05:07:09.893369   24808 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2 +20000M
	I0729 05:07:09.901221   24808 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:07:09.901245   24808 main.go:141] libmachine: STDERR: 
	I0729 05:07:09.901266   24808 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2
	I0729 05:07:09.901271   24808 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:07:09.901284   24808 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:07:09.901311   24808 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:3e:42:d9:36:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2
	I0729 05:07:09.902944   24808 main.go:141] libmachine: STDOUT: 
	I0729 05:07:09.902957   24808 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:07:09.902977   24808 client.go:171] duration metric: took 242.125167ms to LocalClient.Create
	I0729 05:07:11.905107   24808 start.go:128] duration metric: took 2.266751458s to createHost
	I0729 05:07:11.905151   24808 start.go:83] releasing machines lock for "old-k8s-version-051000", held for 2.266849625s
	W0729 05:07:11.905208   24808 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:11.916308   24808 out.go:177] * Deleting "old-k8s-version-051000" in qemu2 ...
	W0729 05:07:11.944334   24808 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:11.944361   24808 start.go:729] Will try again in 5 seconds ...
	I0729 05:07:16.946465   24808 start.go:360] acquireMachinesLock for old-k8s-version-051000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:07:16.946851   24808 start.go:364] duration metric: took 316.334µs to acquireMachinesLock for "old-k8s-version-051000"
	I0729 05:07:16.946975   24808 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:07:16.947326   24808 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:07:16.962932   24808 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 05:07:17.013461   24808 start.go:159] libmachine.API.Create for "old-k8s-version-051000" (driver="qemu2")
	I0729 05:07:17.013506   24808 client.go:168] LocalClient.Create starting
	I0729 05:07:17.013625   24808 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:07:17.013693   24808 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:17.013708   24808 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:17.013763   24808 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:07:17.013810   24808 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:17.013824   24808 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:17.014291   24808 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:07:17.175009   24808 main.go:141] libmachine: Creating SSH key...
	I0729 05:07:17.314579   24808 main.go:141] libmachine: Creating Disk image...
	I0729 05:07:17.314586   24808 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:07:17.314816   24808 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2
	I0729 05:07:17.324411   24808 main.go:141] libmachine: STDOUT: 
	I0729 05:07:17.324427   24808 main.go:141] libmachine: STDERR: 
	I0729 05:07:17.324480   24808 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2 +20000M
	I0729 05:07:17.332461   24808 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:07:17.332476   24808 main.go:141] libmachine: STDERR: 
	I0729 05:07:17.332487   24808 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2
	I0729 05:07:17.332494   24808 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:07:17.332504   24808 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:07:17.332541   24808 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:5b:81:2d:9d:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2
	I0729 05:07:17.334159   24808 main.go:141] libmachine: STDOUT: 
	I0729 05:07:17.334174   24808 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:07:17.334186   24808 client.go:171] duration metric: took 320.680417ms to LocalClient.Create
	I0729 05:07:19.336325   24808 start.go:128] duration metric: took 2.389019583s to createHost
	I0729 05:07:19.336373   24808 start.go:83] releasing machines lock for "old-k8s-version-051000", held for 2.38954525s
	W0729 05:07:19.336756   24808 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-051000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-051000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:19.350358   24808 out.go:177] 
	W0729 05:07:19.353372   24808 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:07:19.353422   24808 out.go:239] * 
	* 
	W0729 05:07:19.355846   24808 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:07:19.367379   24808 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-051000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000: exit status 7 (71.318208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)
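
Two exit codes recur throughout this group: the start command exits 80, paired in the log with the GUEST_PROVISION reason above, and the post-mortem "minikube status" check exits 7. The status code reads naturally as a bitmask; the sketch below decodes it that way, with flag names and values assumed from minikube's cmd/minikube/cmd/status.go rather than quoted from it. All three bits set is consistent with helpers_test.go treating the code as "may be ok" and skipping log retrieval:

package main

import "fmt"

func main() {
	// Assumed status exit-code bits (see minikube's cmd/minikube/cmd/status.go).
	const (
		minikubeNotRunning = 1 << 0 // host (VM) not running
		clusterNotRunning  = 1 << 1 // control plane not running
		k8sNotRunning      = 1 << 2 // apiserver not responding
	)
	code := 7 // the exit status reported by every post-mortem status check above
	fmt.Println("host down:   ", code&minikubeNotRunning != 0)
	fmt.Println("cluster down:", code&clusterNotRunning != 0)
	fmt.Println("k8s down:    ", code&k8sNotRunning != 0)
}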

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-051000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-051000 create -f testdata/busybox.yaml: exit status 1 (30.205333ms)

** stderr ** 
	error: context "old-k8s-version-051000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-051000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000: exit status 7 (29.911333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-051000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000: exit status 7 (29.536542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
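
DeployApp and the later kubectl-based subtests fail mechanically rather than independently: FirstStart never brought a VM up, so the "old-k8s-version-051000" context was never written to the kubeconfig, and every kubectl --context invocation dies on the same missing-context check. A sketch of that check using k8s.io/client-go (the program is hypothetical; the kubeconfig path and context name are taken from the output above):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG path as printed in the start output above.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19338-21024/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot load kubeconfig:", err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts["old-k8s-version-051000"]; !ok {
		// The same condition kubectl reports in every subtest of this group.
		fmt.Println(`context "old-k8s-version-051000" does not exist`)
	}
}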

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-051000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-051000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-051000 describe deploy/metrics-server -n kube-system: exit status 1 (26.888167ms)

** stderr ** 
	error: context "old-k8s-version-051000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-051000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000: exit status 7 (29.589375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-051000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-051000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.189972792s)

-- stdout --
	* [old-k8s-version-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-051000" primary control-plane node in "old-k8s-version-051000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:07:23.456303   24858 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:07:23.456442   24858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:23.456445   24858 out.go:304] Setting ErrFile to fd 2...
	I0729 05:07:23.456448   24858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:23.456565   24858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:07:23.457595   24858 out.go:298] Setting JSON to false
	I0729 05:07:23.473523   24858 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11212,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:07:23.473596   24858 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:07:23.477577   24858 out.go:177] * [old-k8s-version-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:07:23.484449   24858 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:07:23.484503   24858 notify.go:220] Checking for updates...
	I0729 05:07:23.492518   24858 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:07:23.495512   24858 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:07:23.498486   24858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:07:23.501531   24858 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:07:23.503034   24858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:07:23.506754   24858 config.go:182] Loaded profile config "old-k8s-version-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 05:07:23.509450   24858 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 05:07:23.512590   24858 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:07:23.515511   24858 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 05:07:23.522537   24858 start.go:297] selected driver: qemu2
	I0729 05:07:23.522545   24858 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:07:23.522610   24858 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:07:23.525124   24858 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:07:23.525172   24858 cni.go:84] Creating CNI manager for ""
	I0729 05:07:23.525179   24858 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 05:07:23.525198   24858 start.go:340] cluster config:
	{Name:old-k8s-version-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:07:23.529109   24858 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:23.537550   24858 out.go:177] * Starting "old-k8s-version-051000" primary control-plane node in "old-k8s-version-051000" cluster
	I0729 05:07:23.541494   24858 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 05:07:23.541507   24858 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 05:07:23.541519   24858 cache.go:56] Caching tarball of preloaded images
	I0729 05:07:23.541564   24858 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:07:23.541569   24858 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 05:07:23.541631   24858 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/old-k8s-version-051000/config.json ...
	I0729 05:07:23.542075   24858 start.go:360] acquireMachinesLock for old-k8s-version-051000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:07:23.542103   24858 start.go:364] duration metric: took 22.5µs to acquireMachinesLock for "old-k8s-version-051000"
	I0729 05:07:23.542112   24858 start.go:96] Skipping create...Using existing machine configuration
	I0729 05:07:23.542117   24858 fix.go:54] fixHost starting: 
	I0729 05:07:23.542231   24858 fix.go:112] recreateIfNeeded on old-k8s-version-051000: state=Stopped err=<nil>
	W0729 05:07:23.542239   24858 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 05:07:23.546588   24858 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-051000" ...
	I0729 05:07:23.554483   24858 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:07:23.554516   24858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:5b:81:2d:9d:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2
	I0729 05:07:23.556612   24858 main.go:141] libmachine: STDOUT: 
	I0729 05:07:23.556632   24858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:07:23.556661   24858 fix.go:56] duration metric: took 14.542958ms for fixHost
	I0729 05:07:23.556665   24858 start.go:83] releasing machines lock for "old-k8s-version-051000", held for 14.557291ms
	W0729 05:07:23.556671   24858 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:07:23.556700   24858 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:23.556704   24858 start.go:729] Will try again in 5 seconds ...
	I0729 05:07:28.558859   24858 start.go:360] acquireMachinesLock for old-k8s-version-051000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:07:28.559381   24858 start.go:364] duration metric: took 399.042µs to acquireMachinesLock for "old-k8s-version-051000"
	I0729 05:07:28.559533   24858 start.go:96] Skipping create...Using existing machine configuration
	I0729 05:07:28.559554   24858 fix.go:54] fixHost starting: 
	I0729 05:07:28.560290   24858 fix.go:112] recreateIfNeeded on old-k8s-version-051000: state=Stopped err=<nil>
	W0729 05:07:28.560315   24858 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 05:07:28.569634   24858 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-051000" ...
	I0729 05:07:28.572665   24858 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:07:28.572902   24858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:5b:81:2d:9d:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/old-k8s-version-051000/disk.qcow2
	I0729 05:07:28.582756   24858 main.go:141] libmachine: STDOUT: 
	I0729 05:07:28.582814   24858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:07:28.582890   24858 fix.go:56] duration metric: took 23.339583ms for fixHost
	I0729 05:07:28.582906   24858 start.go:83] releasing machines lock for "old-k8s-version-051000", held for 23.501875ms
	W0729 05:07:28.583064   24858 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:28.591560   24858 out.go:177] 
	W0729 05:07:28.595700   24858 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:07:28.595720   24858 out.go:239] * 
	* 
	W0729 05:07:28.597909   24858 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:07:28.605703   24858 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-051000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000: exit status 7 (69.386708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
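
SecondStart exercises the other code path: with an existing profile, fixHost restarts the stopped machine instead of createHost building a new one, yet it fails on the identical socket dial. Both paths also share the recovery shape visible in the log: one failed attempt, a warning, a five-second wait, one retry, then exit 80. A compressed Go sketch of that shape (startHost here is a hypothetical stand-in, not minikube's function):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for minikube's createHost/fixHost attempt (hypothetical).
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	// Mirrors the sequence in the log: fail, warn, wait 5s, retry once, give up.
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}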

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-051000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000: exit status 7 (33.113667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-051000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-051000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-051000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.799209ms)

** stderr ** 
	error: context "old-k8s-version-051000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-051000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000: exit status 7 (29.752291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-051000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
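Every entry in the diff above carries a "-" prefix, i.e. all eight expected v1.20.0 images are in the want list and none came back: with the host stopped, `image list` returns nothing, so the whole expected set is reported missing. The comparison can be approximated by hand against a running profile (a sketch reusing the command from the test; `python3 -m json.tool` is only assumed here for readable output):

    $ out/minikube-darwin-arm64 -p old-k8s-version-051000 image list --format=json | python3 -m json.tool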
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000: exit status 7 (29.652042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-051000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-051000 --alsologtostderr -v=1: exit status 83 (41.905458ms)

-- stdout --
	* The control-plane node old-k8s-version-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-051000"

-- /stdout --
** stderr ** 
	I0729 05:07:28.878333   24877 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:07:28.878724   24877 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:28.878727   24877 out.go:304] Setting ErrFile to fd 2...
	I0729 05:07:28.878730   24877 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:28.878901   24877 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:07:28.879102   24877 out.go:298] Setting JSON to false
	I0729 05:07:28.879108   24877 mustload.go:65] Loading cluster: old-k8s-version-051000
	I0729 05:07:28.879299   24877 config.go:182] Loaded profile config "old-k8s-version-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 05:07:28.883384   24877 out.go:177] * The control-plane node old-k8s-version-051000 host is not running: state=Stopped
	I0729 05:07:28.887313   24877 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-051000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-051000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000: exit status 7 (30.13025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-051000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000: exit status 7 (30.533125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
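Exit status 83 marks minikube's "host not running" advice path, so the pause itself was never attempted; the command's own stdout already names the recovery step. The sequence implied by that hint would be (a sketch; it can only succeed once the socket_vmnet failure shown elsewhere in this report is fixed):

    $ minikube start -p old-k8s-version-051000
    $ minikube pause -p old-k8s-version-051000 --alsologtostderr -v=1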

TestStartStop/group/no-preload/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-354000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-354000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.963323041s)

-- stdout --
	* [no-preload-354000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-354000" primary control-plane node in "no-preload-354000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-354000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:07:29.202861   24894 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:07:29.202986   24894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:29.202989   24894 out.go:304] Setting ErrFile to fd 2...
	I0729 05:07:29.202991   24894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:29.203103   24894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:07:29.204157   24894 out.go:298] Setting JSON to false
	I0729 05:07:29.220389   24894 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11218,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:07:29.220472   24894 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:07:29.224338   24894 out.go:177] * [no-preload-354000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:07:29.231294   24894 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:07:29.231336   24894 notify.go:220] Checking for updates...
	I0729 05:07:29.238322   24894 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:07:29.241338   24894 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:07:29.244263   24894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:07:29.247386   24894 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:07:29.250325   24894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:07:29.253641   24894 config.go:182] Loaded profile config "cert-expiration-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:07:29.253699   24894 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:07:29.253755   24894 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:07:29.258252   24894 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:07:29.265235   24894 start.go:297] selected driver: qemu2
	I0729 05:07:29.265244   24894 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:07:29.265251   24894 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:07:29.267344   24894 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:07:29.271340   24894 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:07:29.274361   24894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:07:29.274379   24894 cni.go:84] Creating CNI manager for ""
	I0729 05:07:29.274385   24894 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:07:29.274393   24894 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 05:07:29.274413   24894 start.go:340] cluster config:
	{Name:no-preload-354000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-354000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:07:29.277929   24894 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:29.286259   24894 out.go:177] * Starting "no-preload-354000" primary control-plane node in "no-preload-354000" cluster
	I0729 05:07:29.290270   24894 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 05:07:29.290338   24894 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/no-preload-354000/config.json ...
	I0729 05:07:29.290351   24894 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/no-preload-354000/config.json: {Name:mk4bf9dd6ae2a6d6500971fcad0ad1de09e78824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:07:29.290343   24894 cache.go:107] acquiring lock: {Name:mk6e9d4699d4fea0baf71716dba43d2ecd2a3927 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:29.290353   24894 cache.go:107] acquiring lock: {Name:mk28c16aba2fbd7763208d3c64695f6cb8c6f9a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:29.290367   24894 cache.go:107] acquiring lock: {Name:mk6712edbd0b7982a4ff4f34c9d38a11c91fb4d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:29.290378   24894 cache.go:107] acquiring lock: {Name:mkf1a428a0d53d7a0687c4af3d2cac14bf1f4ce4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:29.290488   24894 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 05:07:29.290509   24894 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 05:07:29.290545   24894 cache.go:107] acquiring lock: {Name:mk747c74cc86d99b479f3af16ddb7f5d3feedc64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:29.290566   24894 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 05:07:29.290587   24894 start.go:360] acquireMachinesLock for no-preload-354000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:07:29.290625   24894 start.go:364] duration metric: took 32.375µs to acquireMachinesLock for "no-preload-354000"
	I0729 05:07:29.290615   24894 cache.go:107] acquiring lock: {Name:mk4e4a62011b7e7ad4ac747f5675e45c322defab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:29.290650   24894 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 05:07:29.290639   24894 start.go:93] Provisioning new machine with config: &{Name:no-preload-354000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-354000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:07:29.290688   24894 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:07:29.290683   24894 cache.go:107] acquiring lock: {Name:mk61105a322c86f90090a912f5e5c4c3460249d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:29.290749   24894 cache.go:107] acquiring lock: {Name:mk8e82eaa92bc06c4ccbfd4d6148e76645a9281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:29.290781   24894 cache.go:115] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 05:07:29.290793   24894 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 449.75µs
	I0729 05:07:29.290801   24894 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 05:07:29.290804   24894 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 05:07:29.291252   24894 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 05:07:29.291271   24894 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 05:07:29.295257   24894 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 05:07:29.298621   24894 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 05:07:29.298657   24894 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 05:07:29.298678   24894 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 05:07:29.298686   24894 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 05:07:29.298719   24894 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 05:07:29.298752   24894 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 05:07:29.298792   24894 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 05:07:29.312149   24894 start.go:159] libmachine.API.Create for "no-preload-354000" (driver="qemu2")
	I0729 05:07:29.312177   24894 client.go:168] LocalClient.Create starting
	I0729 05:07:29.312278   24894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:07:29.312308   24894 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:29.312317   24894 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:29.312364   24894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:07:29.312387   24894 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:29.312395   24894 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:29.312749   24894 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:07:29.466992   24894 main.go:141] libmachine: Creating SSH key...
	I0729 05:07:29.638740   24894 main.go:141] libmachine: Creating Disk image...
	I0729 05:07:29.638764   24894 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:07:29.639011   24894 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2
	I0729 05:07:29.643679   24894 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 05:07:29.648565   24894 main.go:141] libmachine: STDOUT: 
	I0729 05:07:29.648582   24894 main.go:141] libmachine: STDERR: 
	I0729 05:07:29.648629   24894 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2 +20000M
	I0729 05:07:29.651977   24894 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 05:07:29.656920   24894 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:07:29.656929   24894 main.go:141] libmachine: STDERR: 
	I0729 05:07:29.656942   24894 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2
	I0729 05:07:29.656948   24894 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:07:29.656959   24894 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:07:29.656983   24894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:02:7d:67:f5:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2
	I0729 05:07:29.658707   24894 main.go:141] libmachine: STDOUT: 
	I0729 05:07:29.658725   24894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:07:29.658741   24894 client.go:171] duration metric: took 346.567625ms to LocalClient.Create
	I0729 05:07:29.687530   24894 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0729 05:07:29.688034   24894 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 05:07:29.711485   24894 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0729 05:07:29.729837   24894 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 05:07:29.782121   24894 cache.go:162] opening:  /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 05:07:29.888273   24894 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 05:07:29.888325   24894 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 597.954792ms
	I0729 05:07:29.888349   24894 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 05:07:31.658905   24894 start.go:128] duration metric: took 2.368230125s to createHost
	I0729 05:07:31.658971   24894 start.go:83] releasing machines lock for "no-preload-354000", held for 2.368380334s
	W0729 05:07:31.659066   24894 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:31.674249   24894 out.go:177] * Deleting "no-preload-354000" in qemu2 ...
	W0729 05:07:31.694653   24894 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:31.694699   24894 start.go:729] Will try again in 5 seconds ...
	I0729 05:07:32.152976   24894 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 05:07:32.153027   24894 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.862529083s
	I0729 05:07:32.153053   24894 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 05:07:32.801982   24894 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 05:07:32.802054   24894 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 3.511767875s
	I0729 05:07:32.802079   24894 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 05:07:33.259374   24894 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 05:07:33.259431   24894 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.96895375s
	I0729 05:07:33.259459   24894 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 05:07:33.766039   24894 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 05:07:33.766100   24894 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.475524042s
	I0729 05:07:33.766129   24894 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 05:07:33.792691   24894 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 05:07:33.792733   24894 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 4.502457583s
	I0729 05:07:33.792756   24894 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 05:07:36.695046   24894 start.go:360] acquireMachinesLock for no-preload-354000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:07:36.695493   24894 start.go:364] duration metric: took 363.375µs to acquireMachinesLock for "no-preload-354000"
	I0729 05:07:36.695616   24894 start.go:93] Provisioning new machine with config: &{Name:no-preload-354000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-354000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:07:36.695889   24894 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:07:36.705434   24894 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 05:07:36.756188   24894 start.go:159] libmachine.API.Create for "no-preload-354000" (driver="qemu2")
	I0729 05:07:36.756238   24894 client.go:168] LocalClient.Create starting
	I0729 05:07:36.756370   24894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:07:36.756440   24894 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:36.756461   24894 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:36.756541   24894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:07:36.756585   24894 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:36.756601   24894 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:36.757146   24894 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:07:36.918377   24894 main.go:141] libmachine: Creating SSH key...
	I0729 05:07:37.067464   24894 main.go:141] libmachine: Creating Disk image...
	I0729 05:07:37.067471   24894 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:07:37.067707   24894 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2
	I0729 05:07:37.077214   24894 main.go:141] libmachine: STDOUT: 
	I0729 05:07:37.077232   24894 main.go:141] libmachine: STDERR: 
	I0729 05:07:37.077285   24894 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2 +20000M
	I0729 05:07:37.085286   24894 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:07:37.085301   24894 main.go:141] libmachine: STDERR: 
	I0729 05:07:37.085323   24894 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2
	I0729 05:07:37.085328   24894 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:07:37.085344   24894 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:07:37.085374   24894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:74:3c:00:a5:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2
	I0729 05:07:37.087040   24894 main.go:141] libmachine: STDOUT: 
	I0729 05:07:37.087059   24894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:07:37.087071   24894 client.go:171] duration metric: took 330.83475ms to LocalClient.Create
	I0729 05:07:38.751162   24894 cache.go:157] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 05:07:38.751232   24894 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 9.460783791s
	I0729 05:07:38.751262   24894 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 05:07:38.751314   24894 cache.go:87] Successfully saved all images to host disk.
	I0729 05:07:39.089214   24894 start.go:128] duration metric: took 2.393346s to createHost
	I0729 05:07:39.089296   24894 start.go:83] releasing machines lock for "no-preload-354000", held for 2.393814708s
	W0729 05:07:39.089646   24894 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-354000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-354000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:39.106102   24894 out.go:177] 
	W0729 05:07:39.110170   24894 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:07:39.110205   24894 out.go:239] * 
	* 
	W0729 05:07:39.112630   24894 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:07:39.126030   24894 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-354000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000: exit status 7 (68.879042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.04s)
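Every VM creation in this report dies at the same step: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that wrapper cannot reach the daemon socket at /var/run/socket_vmnet, so the start fails before Kubernetes is ever attempted. Host-side checks worth running on the Jenkins agent (a sketch; the Homebrew service name assumes the standard socket_vmnet install that the paths in this log point to):

    # does the socket exist, and is the daemon alive?
    $ ls -l /var/run/socket_vmnet
    $ pgrep -fl socket_vmnet
    # restart the daemon via Homebrew services (root is required for vmnet access):
    $ HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet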

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-354000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-354000 create -f testdata/busybox.yaml: exit status 1 (29.209416ms)

** stderr ** 
	error: context "no-preload-354000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-354000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000: exit status 7 (29.923ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-354000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000: exit status 7 (29.734833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-354000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-354000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-354000 describe deploy/metrics-server -n kube-system: exit status 1 (26.7525ms)

** stderr ** 
	error: context "no-preload-354000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-354000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000: exit status 7 (29.892208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
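Note that `addons enable metrics-server` itself reported no error: with the host down it only records the addon, its images, and its registries in the profile config (they surface in the cluster config dumps below as CustomAddonImages and CustomAddonRegistries), while the kubectl verification still fails for lack of a context. A quick way to see what got recorded (a sketch; `addons list` is a standard minikube subcommand):

    $ out/minikube-darwin-arm64 -p no-preload-354000 addons list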

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-354000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-354000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.179316666s)

-- stdout --
	* [no-preload-354000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-354000" primary control-plane node in "no-preload-354000" cluster
	* Restarting existing qemu2 VM for "no-preload-354000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-354000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:07:42.476398   24973 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:07:42.476521   24973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:42.476524   24973 out.go:304] Setting ErrFile to fd 2...
	I0729 05:07:42.476526   24973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:42.476640   24973 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:07:42.477644   24973 out.go:298] Setting JSON to false
	I0729 05:07:42.493809   24973 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11231,"bootTime":1722243631,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:07:42.493874   24973 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:07:42.498638   24973 out.go:177] * [no-preload-354000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:07:42.505610   24973 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:07:42.505685   24973 notify.go:220] Checking for updates...
	I0729 05:07:42.511530   24973 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:07:42.514590   24973 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:07:42.517622   24973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:07:42.520516   24973 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:07:42.523599   24973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:07:42.526837   24973 config.go:182] Loaded profile config "no-preload-354000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 05:07:42.527096   24973 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:07:42.530527   24973 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 05:07:42.537574   24973 start.go:297] selected driver: qemu2
	I0729 05:07:42.537581   24973 start.go:901] validating driver "qemu2" against &{Name:no-preload-354000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-354000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:07:42.537624   24973 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:07:42.539895   24973 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:07:42.539915   24973 cni.go:84] Creating CNI manager for ""
	I0729 05:07:42.539923   24973 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:07:42.539948   24973 start.go:340] cluster config:
	{Name:no-preload-354000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-354000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:07:42.543522   24973 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:42.552592   24973 out.go:177] * Starting "no-preload-354000" primary control-plane node in "no-preload-354000" cluster
	I0729 05:07:42.556396   24973 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 05:07:42.556482   24973 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/no-preload-354000/config.json ...
	I0729 05:07:42.556523   24973 cache.go:107] acquiring lock: {Name:mk6e9d4699d4fea0baf71716dba43d2ecd2a3927 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:42.556522   24973 cache.go:107] acquiring lock: {Name:mk28c16aba2fbd7763208d3c64695f6cb8c6f9a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:42.556536   24973 cache.go:107] acquiring lock: {Name:mk61105a322c86f90090a912f5e5c4c3460249d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:42.556591   24973 cache.go:115] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 05:07:42.556595   24973 cache.go:115] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 05:07:42.556596   24973 cache.go:115] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 05:07:42.556603   24973 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 75.584µs
	I0729 05:07:42.556605   24973 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 87.708µs
	I0729 05:07:42.556616   24973 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 05:07:42.556609   24973 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 05:07:42.556597   24973 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 75.625µs
	I0729 05:07:42.556625   24973 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 05:07:42.556605   24973 cache.go:107] acquiring lock: {Name:mkf1a428a0d53d7a0687c4af3d2cac14bf1f4ce4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:42.556630   24973 cache.go:107] acquiring lock: {Name:mk8e82eaa92bc06c4ccbfd4d6148e76645a9281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:42.556633   24973 cache.go:107] acquiring lock: {Name:mk4e4a62011b7e7ad4ac747f5675e45c322defab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:42.556667   24973 cache.go:115] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 05:07:42.556616   24973 cache.go:107] acquiring lock: {Name:mk747c74cc86d99b479f3af16ddb7f5d3feedc64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:42.556701   24973 cache.go:107] acquiring lock: {Name:mk6712edbd0b7982a4ff4f34c9d38a11c91fb4d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:42.556671   24973 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 66.292µs
	I0729 05:07:42.556678   24973 cache.go:115] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 05:07:42.556732   24973 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 05:07:42.556738   24973 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 107.833µs
	I0729 05:07:42.556743   24973 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 05:07:42.556683   24973 cache.go:115] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 05:07:42.556748   24973 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 115.625µs
	I0729 05:07:42.556752   24973 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 05:07:42.556746   24973 cache.go:115] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 05:07:42.556756   24973 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 102.083µs
	I0729 05:07:42.556756   24973 cache.go:115] /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 05:07:42.556759   24973 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 05:07:42.556763   24973 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 148.166µs
	I0729 05:07:42.556767   24973 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 05:07:42.556772   24973 cache.go:87] Successfully saved all images to host disk.
	I0729 05:07:42.556935   24973 start.go:360] acquireMachinesLock for no-preload-354000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:07:42.556963   24973 start.go:364] duration metric: took 22.75µs to acquireMachinesLock for "no-preload-354000"
	I0729 05:07:42.556974   24973 start.go:96] Skipping create...Using existing machine configuration
	I0729 05:07:42.556981   24973 fix.go:54] fixHost starting: 
	I0729 05:07:42.557100   24973 fix.go:112] recreateIfNeeded on no-preload-354000: state=Stopped err=<nil>
	W0729 05:07:42.557109   24973 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 05:07:42.565438   24973 out.go:177] * Restarting existing qemu2 VM for "no-preload-354000" ...
	I0729 05:07:42.569547   24973 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:07:42.569588   24973 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:74:3c:00:a5:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2
	I0729 05:07:42.571662   24973 main.go:141] libmachine: STDOUT: 
	I0729 05:07:42.571685   24973 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:07:42.571716   24973 fix.go:56] duration metric: took 14.73625ms for fixHost
	I0729 05:07:42.571721   24973 start.go:83] releasing machines lock for "no-preload-354000", held for 14.753667ms
	W0729 05:07:42.571727   24973 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:07:42.571753   24973 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:42.571758   24973 start.go:729] Will try again in 5 seconds ...
	I0729 05:07:47.573830   24973 start.go:360] acquireMachinesLock for no-preload-354000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:07:47.574247   24973 start.go:364] duration metric: took 324.375µs to acquireMachinesLock for "no-preload-354000"
	I0729 05:07:47.574410   24973 start.go:96] Skipping create...Using existing machine configuration
	I0729 05:07:47.574429   24973 fix.go:54] fixHost starting: 
	I0729 05:07:47.575102   24973 fix.go:112] recreateIfNeeded on no-preload-354000: state=Stopped err=<nil>
	W0729 05:07:47.575126   24973 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 05:07:47.578472   24973 out.go:177] * Restarting existing qemu2 VM for "no-preload-354000" ...
	I0729 05:07:47.582499   24973 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:07:47.582753   24973 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:74:3c:00:a5:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/no-preload-354000/disk.qcow2
	I0729 05:07:47.591492   24973 main.go:141] libmachine: STDOUT: 
	I0729 05:07:47.591558   24973 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:07:47.591629   24973 fix.go:56] duration metric: took 17.196541ms for fixHost
	I0729 05:07:47.591649   24973 start.go:83] releasing machines lock for "no-preload-354000", held for 17.358416ms
	W0729 05:07:47.591802   24973 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-354000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-354000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:47.599410   24973 out.go:177] 
	W0729 05:07:47.602466   24973 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:07:47.602502   24973 out.go:239] * 
	* 
	W0729 05:07:47.605298   24973 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:07:47.618414   24973 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-354000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000: exit status 7 (67.904417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
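
The SecondStart failure above (and its single retry five seconds later) is not a Kubernetes problem: minikube's qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet ("Connection refused"). A minimal shell check for the daemon on the build agent, a sketch using only the paths that appear in the log (the launchd service label is an assumption and may differ by install method):

	# Does the daemon's unix socket exist at the path the driver uses?
	ls -l /var/run/socket_vmnet
	# Probe it directly; "Connection refused" here reproduces the driver failure
	nc -U -w 1 /var/run/socket_vmnet < /dev/null && echo "daemon reachable"
	# If socket_vmnet runs as a launchd service, confirm it is loaded
	sudo launchctl list | grep -i socket_vmnet

If the socket is missing or refuses connections, restarting the socket_vmnet daemon on the agent should clear this failure and the dependent sub-tests below.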

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-354000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000: exit status 7 (32.489875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
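
UserAppExistsAfterStop (and the identical context errors in the sub-tests that follow) fails before ever touching a cluster: because SecondStart never brought the VM up, minikube never rewrote the "no-preload-354000" entry into the kubeconfig, so every kubectl call aborts at client-config time. A quick way to confirm that reading of the log, sketched with the KUBECONFIG path the run itself reports:

	# List the contexts the test binaries can see
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19338-21024/kubeconfig config get-contexts
	# The same lookup the test performs, made explicit
	kubectl config get-contexts no-preload-354000 || echo "context missing, as the test reports"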

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-354000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-354000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-354000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.886542ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-354000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-354000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000: exit status 7 (30.228166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-354000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000: exit status 7 (30.176667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-354000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-354000 --alsologtostderr -v=1: exit status 83 (41.24925ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-354000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-354000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 05:07:47.885783   24992 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:07:47.885946   24992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:47.885950   24992 out.go:304] Setting ErrFile to fd 2...
	I0729 05:07:47.885952   24992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:47.886079   24992 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:07:47.886288   24992 out.go:298] Setting JSON to false
	I0729 05:07:47.886295   24992 mustload.go:65] Loading cluster: no-preload-354000
	I0729 05:07:47.886491   24992 config.go:182] Loaded profile config "no-preload-354000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 05:07:47.890340   24992 out.go:177] * The control-plane node no-preload-354000 host is not running: state=Stopped
	I0729 05:07:47.894405   24992 out.go:177]   To start a cluster, run: "minikube start -p no-preload-354000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-354000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000: exit status 7 (29.229625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-354000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000: exit status 7 (29.574875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (10.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-128000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-128000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.998637542s)

                                                
                                                
-- stdout --
	* [embed-certs-128000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-128000" primary control-plane node in "embed-certs-128000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-128000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 05:07:48.201612   25009 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:07:48.201730   25009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:48.201733   25009 out.go:304] Setting ErrFile to fd 2...
	I0729 05:07:48.201735   25009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:07:48.201851   25009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:07:48.202856   25009 out.go:298] Setting JSON to false
	I0729 05:07:48.218995   25009 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11237,"bootTime":1722243631,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:07:48.219064   25009 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:07:48.224373   25009 out.go:177] * [embed-certs-128000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:07:48.231383   25009 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:07:48.231440   25009 notify.go:220] Checking for updates...
	I0729 05:07:48.239315   25009 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:07:48.242360   25009 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:07:48.245357   25009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:07:48.248400   25009 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:07:48.251348   25009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:07:48.254663   25009 config.go:182] Loaded profile config "cert-expiration-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:07:48.254727   25009 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:07:48.254770   25009 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:07:48.259290   25009 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:07:48.266337   25009 start.go:297] selected driver: qemu2
	I0729 05:07:48.266342   25009 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:07:48.266347   25009 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:07:48.268618   25009 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:07:48.273252   25009 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:07:48.276427   25009 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:07:48.276447   25009 cni.go:84] Creating CNI manager for ""
	I0729 05:07:48.276455   25009 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:07:48.276459   25009 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 05:07:48.276483   25009 start.go:340] cluster config:
	{Name:embed-certs-128000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-128000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:07:48.280261   25009 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:07:48.288350   25009 out.go:177] * Starting "embed-certs-128000" primary control-plane node in "embed-certs-128000" cluster
	I0729 05:07:48.292465   25009 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:07:48.292482   25009 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:07:48.292492   25009 cache.go:56] Caching tarball of preloaded images
	I0729 05:07:48.292559   25009 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:07:48.292565   25009 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:07:48.292644   25009 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/embed-certs-128000/config.json ...
	I0729 05:07:48.292655   25009 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/embed-certs-128000/config.json: {Name:mke25c473bd32967ca9cfcb33937e0e25e4ffb24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:07:48.292891   25009 start.go:360] acquireMachinesLock for embed-certs-128000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:07:48.292930   25009 start.go:364] duration metric: took 29.791µs to acquireMachinesLock for "embed-certs-128000"
	I0729 05:07:48.292944   25009 start.go:93] Provisioning new machine with config: &{Name:embed-certs-128000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-128000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:07:48.292974   25009 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:07:48.301306   25009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 05:07:48.319857   25009 start.go:159] libmachine.API.Create for "embed-certs-128000" (driver="qemu2")
	I0729 05:07:48.319883   25009 client.go:168] LocalClient.Create starting
	I0729 05:07:48.319946   25009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:07:48.319976   25009 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:48.319989   25009 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:48.320025   25009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:07:48.320049   25009 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:48.320060   25009 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:48.320434   25009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:07:48.473231   25009 main.go:141] libmachine: Creating SSH key...
	I0729 05:07:48.657290   25009 main.go:141] libmachine: Creating Disk image...
	I0729 05:07:48.657301   25009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:07:48.657542   25009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2
	I0729 05:07:48.666914   25009 main.go:141] libmachine: STDOUT: 
	I0729 05:07:48.666933   25009 main.go:141] libmachine: STDERR: 
	I0729 05:07:48.666979   25009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2 +20000M
	I0729 05:07:48.674853   25009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:07:48.674868   25009 main.go:141] libmachine: STDERR: 
	I0729 05:07:48.674895   25009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2
	I0729 05:07:48.674905   25009 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:07:48.674915   25009 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:07:48.674948   25009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:dd:dc:1a:5d:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2
	I0729 05:07:48.676534   25009 main.go:141] libmachine: STDOUT: 
	I0729 05:07:48.676551   25009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:07:48.676569   25009 client.go:171] duration metric: took 356.688542ms to LocalClient.Create
	I0729 05:07:50.678713   25009 start.go:128] duration metric: took 2.385758875s to createHost
	I0729 05:07:50.678772   25009 start.go:83] releasing machines lock for "embed-certs-128000", held for 2.385875917s
	W0729 05:07:50.678869   25009 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:50.693986   25009 out.go:177] * Deleting "embed-certs-128000" in qemu2 ...
	W0729 05:07:50.720614   25009 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:50.720641   25009 start.go:729] Will try again in 5 seconds ...
	I0729 05:07:55.722797   25009 start.go:360] acquireMachinesLock for embed-certs-128000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:07:55.723251   25009 start.go:364] duration metric: took 352.209µs to acquireMachinesLock for "embed-certs-128000"
	I0729 05:07:55.723395   25009 start.go:93] Provisioning new machine with config: &{Name:embed-certs-128000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-128000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:07:55.723705   25009 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:07:55.739089   25009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 05:07:55.787440   25009 start.go:159] libmachine.API.Create for "embed-certs-128000" (driver="qemu2")
	I0729 05:07:55.787490   25009 client.go:168] LocalClient.Create starting
	I0729 05:07:55.787610   25009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:07:55.787664   25009 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:55.787680   25009 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:55.787753   25009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:07:55.787814   25009 main.go:141] libmachine: Decoding PEM data...
	I0729 05:07:55.787824   25009 main.go:141] libmachine: Parsing certificate...
	I0729 05:07:55.788445   25009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:07:55.959854   25009 main.go:141] libmachine: Creating SSH key...
	I0729 05:07:56.108880   25009 main.go:141] libmachine: Creating Disk image...
	I0729 05:07:56.108894   25009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:07:56.109116   25009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2
	I0729 05:07:56.118254   25009 main.go:141] libmachine: STDOUT: 
	I0729 05:07:56.118279   25009 main.go:141] libmachine: STDERR: 
	I0729 05:07:56.118338   25009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2 +20000M
	I0729 05:07:56.126165   25009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:07:56.126180   25009 main.go:141] libmachine: STDERR: 
	I0729 05:07:56.126193   25009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2
	I0729 05:07:56.126197   25009 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:07:56.126214   25009 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:07:56.126248   25009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:bd:01:c8:0f:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2
	I0729 05:07:56.127872   25009 main.go:141] libmachine: STDOUT: 
	I0729 05:07:56.127890   25009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:07:56.127903   25009 client.go:171] duration metric: took 340.413791ms to LocalClient.Create
	I0729 05:07:58.130041   25009 start.go:128] duration metric: took 2.4063535s to createHost
	I0729 05:07:58.130101   25009 start.go:83] releasing machines lock for "embed-certs-128000", held for 2.406870417s
	W0729 05:07:58.130498   25009 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-128000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-128000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:07:58.143117   25009 out.go:177] 
	W0729 05:07:58.146236   25009 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:07:58.146260   25009 out.go:239] * 
	* 
	W0729 05:07:58.148644   25009 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:07:58.160065   25009 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-128000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000: exit status 7 (66.245166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.07s)
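
Note what the FirstStart stderr shows succeeding before the failure: qemu-img convert and qemu-img resize both return cleanly, so disk provisioning is fine and only the socket_vmnet network attach is broken (socket_vmnet_client exits before qemu ever boots). One way to isolate the network layer, purely as a debugging sketch and not something the harness does, is to boot the same machine definition with qemu's built-in user-mode NIC in place of the socket netdev:

	# Same flags as the logged invocation, minus socket_vmnet_client/-daemonize,
	# with -netdev user substituted for -netdev socket (debugging assumption)
	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -boot d \
	  -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/boot2docker.iso \
	  -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
	  /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2

If the VM boots this way, the qemu/hvf stack is healthy and the failure is confined to the socket_vmnet daemon.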

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-128000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-128000 create -f testdata/busybox.yaml: exit status 1 (30.118709ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-128000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-128000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000: exit status 7 (30.853583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-128000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000: exit status 7 (29.735625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-128000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-128000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-128000 describe deploy/metrics-server -n kube-system: exit status 1 (26.9265ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-128000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-128000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000: exit status 7 (30.040041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
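Note: the assertion at start_stop_delete_test.go:221 composes its expected image from the two flags passed to "addons enable": --registries supplies the registry prefix and --images the repository:tag, yielding "fake.domain/registry.k8s.io/echoserver:1.4". On a cluster that actually exists, the check reduces to roughly this (a sketch using standard kubectl jsonpath):

    # Print the image the metrics-server deployment actually runs.
    kubectl --context embed-certs-128000 -n kube-system get deploy metrics-server \
        -o jsonpath='{.spec.template.spec.containers[0].image}'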

TestStartStop/group/embed-certs/serial/SecondStart (7.03s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-128000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-128000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.969167791s)

-- stdout --
	* [embed-certs-128000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-128000" primary control-plane node in "embed-certs-128000" cluster
	* Restarting existing qemu2 VM for "embed-certs-128000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-128000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:08:01.741974   25064 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:08:01.742104   25064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:01.742108   25064 out.go:304] Setting ErrFile to fd 2...
	I0729 05:08:01.742110   25064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:01.742250   25064 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:08:01.743252   25064 out.go:298] Setting JSON to false
	I0729 05:08:01.759396   25064 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11250,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:08:01.759463   25064 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:08:01.764125   25064 out.go:177] * [embed-certs-128000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:08:01.772083   25064 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:08:01.772136   25064 notify.go:220] Checking for updates...
	I0729 05:08:01.781046   25064 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:08:01.784040   25064 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:08:01.787157   25064 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:08:01.790093   25064 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:08:01.793053   25064 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:08:01.796329   25064 config.go:182] Loaded profile config "embed-certs-128000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:08:01.796598   25064 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:08:01.801015   25064 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 05:08:01.808094   25064 start.go:297] selected driver: qemu2
	I0729 05:08:01.808100   25064 start.go:901] validating driver "qemu2" against &{Name:embed-certs-128000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-128000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:08:01.808168   25064 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:08:01.810396   25064 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:08:01.810446   25064 cni.go:84] Creating CNI manager for ""
	I0729 05:08:01.810453   25064 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:08:01.810470   25064 start.go:340] cluster config:
	{Name:embed-certs-128000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-128000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:08:01.813896   25064 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:08:01.822089   25064 out.go:177] * Starting "embed-certs-128000" primary control-plane node in "embed-certs-128000" cluster
	I0729 05:08:01.825984   25064 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:08:01.825999   25064 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:08:01.826008   25064 cache.go:56] Caching tarball of preloaded images
	I0729 05:08:01.826055   25064 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:08:01.826061   25064 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:08:01.826116   25064 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/embed-certs-128000/config.json ...
	I0729 05:08:01.826644   25064 start.go:360] acquireMachinesLock for embed-certs-128000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:08:01.826674   25064 start.go:364] duration metric: took 23.875µs to acquireMachinesLock for "embed-certs-128000"
	I0729 05:08:01.826684   25064 start.go:96] Skipping create...Using existing machine configuration
	I0729 05:08:01.826690   25064 fix.go:54] fixHost starting: 
	I0729 05:08:01.826816   25064 fix.go:112] recreateIfNeeded on embed-certs-128000: state=Stopped err=<nil>
	W0729 05:08:01.826825   25064 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 05:08:01.833903   25064 out.go:177] * Restarting existing qemu2 VM for "embed-certs-128000" ...
	I0729 05:08:01.838087   25064 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:08:01.838136   25064 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:bd:01:c8:0f:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2
	I0729 05:08:01.840189   25064 main.go:141] libmachine: STDOUT: 
	I0729 05:08:01.840213   25064 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:08:01.840240   25064 fix.go:56] duration metric: took 13.550334ms for fixHost
	I0729 05:08:01.840245   25064 start.go:83] releasing machines lock for "embed-certs-128000", held for 13.56625ms
	W0729 05:08:01.840252   25064 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:08:01.840296   25064 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:08:01.840302   25064 start.go:729] Will try again in 5 seconds ...
	I0729 05:08:06.842439   25064 start.go:360] acquireMachinesLock for embed-certs-128000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:08:08.605503   25064 start.go:364] duration metric: took 1.762951792s to acquireMachinesLock for "embed-certs-128000"
	I0729 05:08:08.605612   25064 start.go:96] Skipping create...Using existing machine configuration
	I0729 05:08:08.605629   25064 fix.go:54] fixHost starting: 
	I0729 05:08:08.606382   25064 fix.go:112] recreateIfNeeded on embed-certs-128000: state=Stopped err=<nil>
	W0729 05:08:08.606409   25064 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 05:08:08.613898   25064 out.go:177] * Restarting existing qemu2 VM for "embed-certs-128000" ...
	I0729 05:08:08.628901   25064 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:08:08.629203   25064 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:bd:01:c8:0f:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/embed-certs-128000/disk.qcow2
	I0729 05:08:08.639330   25064 main.go:141] libmachine: STDOUT: 
	I0729 05:08:08.639390   25064 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:08:08.639470   25064 fix.go:56] duration metric: took 33.841791ms for fixHost
	I0729 05:08:08.639487   25064 start.go:83] releasing machines lock for "embed-certs-128000", held for 33.910292ms
	W0729 05:08:08.639679   25064 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-128000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-128000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:08:08.647903   25064 out.go:177] 
	W0729 05:08:08.651919   25064 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:08:08.651942   25064 out.go:239] * 
	* 
	W0729 05:08:08.654288   25064 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:08:08.665852   25064 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-128000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000: exit status 7 (60.473667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.03s)
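Note: the failure is not in minikube's restart logic but one layer down: the socket_vmnet daemon's unix socket at /var/run/socket_vmnet refuses connections, so socket_vmnet_client can never hand QEMU a network descriptor. Assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (not verified against this agent), a plausible check-and-restart looks like:

    # Is the daemon's socket present at all?
    ls -l /var/run/socket_vmnet
    # socket_vmnet must run as root; restart its launchd service.
    HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet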

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-267000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-267000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.883876959s)

-- stdout --
	* [default-k8s-diff-port-267000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-267000" primary control-plane node in "default-k8s-diff-port-267000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-267000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:08:06.245514   25084 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:08:06.245629   25084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:06.245633   25084 out.go:304] Setting ErrFile to fd 2...
	I0729 05:08:06.245635   25084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:06.245770   25084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:08:06.246885   25084 out.go:298] Setting JSON to false
	I0729 05:08:06.262960   25084 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11255,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:08:06.263018   25084 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:08:06.267248   25084 out.go:177] * [default-k8s-diff-port-267000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:08:06.274272   25084 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:08:06.274316   25084 notify.go:220] Checking for updates...
	I0729 05:08:06.282162   25084 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:08:06.285232   25084 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:08:06.288216   25084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:08:06.291142   25084 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:08:06.294198   25084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:08:06.297443   25084 config.go:182] Loaded profile config "embed-certs-128000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:08:06.297506   25084 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:08:06.297553   25084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:08:06.301209   25084 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:08:06.308132   25084 start.go:297] selected driver: qemu2
	I0729 05:08:06.308138   25084 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:08:06.308144   25084 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:08:06.310395   25084 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:08:06.314146   25084 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:08:06.317304   25084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:08:06.317323   25084 cni.go:84] Creating CNI manager for ""
	I0729 05:08:06.317335   25084 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:08:06.317340   25084 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 05:08:06.317372   25084 start.go:340] cluster config:
	{Name:default-k8s-diff-port-267000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-267000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:08:06.320882   25084 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:08:06.329179   25084 out.go:177] * Starting "default-k8s-diff-port-267000" primary control-plane node in "default-k8s-diff-port-267000" cluster
	I0729 05:08:06.333149   25084 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:08:06.333165   25084 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:08:06.333177   25084 cache.go:56] Caching tarball of preloaded images
	I0729 05:08:06.333239   25084 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:08:06.333244   25084 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:08:06.333311   25084 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/default-k8s-diff-port-267000/config.json ...
	I0729 05:08:06.333322   25084 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/default-k8s-diff-port-267000/config.json: {Name:mkd115d6e946a7583d21d63768a9db34654c036d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:08:06.333534   25084 start.go:360] acquireMachinesLock for default-k8s-diff-port-267000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:08:06.333567   25084 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "default-k8s-diff-port-267000"
	I0729 05:08:06.333578   25084 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-267000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-267000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:08:06.333608   25084 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:08:06.342154   25084 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 05:08:06.359797   25084 start.go:159] libmachine.API.Create for "default-k8s-diff-port-267000" (driver="qemu2")
	I0729 05:08:06.359827   25084 client.go:168] LocalClient.Create starting
	I0729 05:08:06.359885   25084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:08:06.359919   25084 main.go:141] libmachine: Decoding PEM data...
	I0729 05:08:06.359928   25084 main.go:141] libmachine: Parsing certificate...
	I0729 05:08:06.359969   25084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:08:06.359990   25084 main.go:141] libmachine: Decoding PEM data...
	I0729 05:08:06.359997   25084 main.go:141] libmachine: Parsing certificate...
	I0729 05:08:06.360362   25084 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:08:06.511659   25084 main.go:141] libmachine: Creating SSH key...
	I0729 05:08:06.584223   25084 main.go:141] libmachine: Creating Disk image...
	I0729 05:08:06.584230   25084 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:08:06.584411   25084 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2
	I0729 05:08:06.593493   25084 main.go:141] libmachine: STDOUT: 
	I0729 05:08:06.593517   25084 main.go:141] libmachine: STDERR: 
	I0729 05:08:06.593565   25084 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2 +20000M
	I0729 05:08:06.601402   25084 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:08:06.601424   25084 main.go:141] libmachine: STDERR: 
	I0729 05:08:06.601444   25084 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2
	I0729 05:08:06.601448   25084 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:08:06.601462   25084 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:08:06.601490   25084 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:67:05:2d:58:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2
	I0729 05:08:06.603080   25084 main.go:141] libmachine: STDOUT: 
	I0729 05:08:06.603095   25084 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:08:06.603116   25084 client.go:171] duration metric: took 243.289ms to LocalClient.Create
	I0729 05:08:08.605249   25084 start.go:128] duration metric: took 2.271666375s to createHost
	I0729 05:08:08.605304   25084 start.go:83] releasing machines lock for "default-k8s-diff-port-267000", held for 2.271770125s
	W0729 05:08:08.605355   25084 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:08:08.624892   25084 out.go:177] * Deleting "default-k8s-diff-port-267000" in qemu2 ...
	W0729 05:08:08.681091   25084 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:08:08.681130   25084 start.go:729] Will try again in 5 seconds ...
	I0729 05:08:13.683226   25084 start.go:360] acquireMachinesLock for default-k8s-diff-port-267000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:08:13.683750   25084 start.go:364] duration metric: took 358.792µs to acquireMachinesLock for "default-k8s-diff-port-267000"
	I0729 05:08:13.683926   25084 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-267000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-267000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:08:13.684224   25084 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:08:13.692826   25084 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 05:08:13.743732   25084 start.go:159] libmachine.API.Create for "default-k8s-diff-port-267000" (driver="qemu2")
	I0729 05:08:13.743784   25084 client.go:168] LocalClient.Create starting
	I0729 05:08:13.743901   25084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:08:13.743977   25084 main.go:141] libmachine: Decoding PEM data...
	I0729 05:08:13.743993   25084 main.go:141] libmachine: Parsing certificate...
	I0729 05:08:13.744052   25084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:08:13.744098   25084 main.go:141] libmachine: Decoding PEM data...
	I0729 05:08:13.744113   25084 main.go:141] libmachine: Parsing certificate...
	I0729 05:08:13.745230   25084 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:08:13.913128   25084 main.go:141] libmachine: Creating SSH key...
	I0729 05:08:14.038039   25084 main.go:141] libmachine: Creating Disk image...
	I0729 05:08:14.038048   25084 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:08:14.038217   25084 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2
	I0729 05:08:14.047611   25084 main.go:141] libmachine: STDOUT: 
	I0729 05:08:14.047629   25084 main.go:141] libmachine: STDERR: 
	I0729 05:08:14.047675   25084 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2 +20000M
	I0729 05:08:14.055463   25084 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:08:14.055478   25084 main.go:141] libmachine: STDERR: 
	I0729 05:08:14.055494   25084 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2
	I0729 05:08:14.055504   25084 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:08:14.055513   25084 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:08:14.055537   25084 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:13:5c:d0:d1:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2
	I0729 05:08:14.057164   25084 main.go:141] libmachine: STDOUT: 
	I0729 05:08:14.057186   25084 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:08:14.057199   25084 client.go:171] duration metric: took 313.415292ms to LocalClient.Create
	I0729 05:08:16.059339   25084 start.go:128] duration metric: took 2.3751215s to createHost
	I0729 05:08:16.059473   25084 start.go:83] releasing machines lock for "default-k8s-diff-port-267000", held for 2.375664291s
	W0729 05:08:16.059841   25084 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-267000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-267000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:08:16.071413   25084 out.go:177] 
	W0729 05:08:16.075550   25084 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:08:16.075576   25084 out.go:239] * 
	* 
	W0729 05:08:16.078164   25084 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:08:16.087539   25084 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-267000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000: exit status 7 (65.300292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-267000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)
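Note: in the libmachine command lines above, socket_vmnet_client is expected to connect to /var/run/socket_vmnet and pass the connected socket to QEMU as file descriptor 3 (hence -netdev socket,id=net0,fd=3), so the "Connection refused" happens before QEMU is ever launched. One way to see whether anything is listening on that path (a sketch; lsof output format varies across macOS versions):

    # List unix-domain sockets and look for the socket_vmnet path.
    sudo lsof -U | grep /var/run/socket_vmnet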

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-128000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000: exit status 7 (31.506ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
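Note: the post-mortem helper's --format={{.Host}} is a Go template over minikube's status output, and the status command encodes component state in its exit code; status 7 here plausibly means the host, cluster, and Kubernetes checks all failed at once, which is why helpers_test.go labels it "may be ok" for a profile that never started. Other fields can be pulled the same way (a sketch; field names follow minikube's documented status template):

    # Show host, kubelet, and apiserver state in one line.
    out/minikube-darwin-arm64 status -p embed-certs-128000 \
        --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'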

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-128000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-128000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-128000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.926542ms)

** stderr ** 
	error: context "embed-certs-128000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-128000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000: exit status 7 (28.752ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-128000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000: exit status 7 (28.731667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
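Note: the "(-want +got)" block is a cmp.Diff-style comparison: each "-" line is an image the test expected the v1.30.3 profile to carry, and the "got" side is empty because the VM never ran, so "image list" returned nothing. On a healthy profile the same query is easier to read in table form (a sketch; same binary and profile as above):

    # Human-readable variant of the failing query.
    out/minikube-darwin-arm64 -p embed-certs-128000 image list --format=table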

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-128000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-128000 --alsologtostderr -v=1: exit status 83 (45.872833ms)

-- stdout --
	* The control-plane node embed-certs-128000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-128000"

-- /stdout --
** stderr ** 
	I0729 05:08:08.928331   25106 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:08:08.928462   25106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:08.928465   25106 out.go:304] Setting ErrFile to fd 2...
	I0729 05:08:08.928467   25106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:08.928598   25106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:08:08.928808   25106 out.go:298] Setting JSON to false
	I0729 05:08:08.928815   25106 mustload.go:65] Loading cluster: embed-certs-128000
	I0729 05:08:08.929013   25106 config.go:182] Loaded profile config "embed-certs-128000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:08:08.933939   25106 out.go:177] * The control-plane node embed-certs-128000 host is not running: state=Stopped
	I0729 05:08:08.941997   25106 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-128000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-128000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000: exit status 7 (28.471791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-128000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000: exit status 7 (29.062875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
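Exit status 83 is minikube declining to pause because the profile's host is already stopped, not a crash in pause itself. A minimal sketch of checking that precondition by hand, reusing the binary, profile name, and suggested fix that appear in the log above (a hypothetical manual run, not part of the test):

	# Host state check; the post-mortem above prints "Stopped"
	out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000
	# Bring the cluster up first, then pausing can succeed
	out/minikube-darwin-arm64 start -p embed-certs-128000
	out/minikube-darwin-arm64 pause -p embed-certs-128000 --alsologtostderr -v=1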

TestStartStop/group/newest-cni/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-577000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-577000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.853550667s)

-- stdout --
	* [newest-cni-577000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-577000" primary control-plane node in "newest-cni-577000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-577000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:08:09.250273   25123 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:08:09.250396   25123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:09.250400   25123 out.go:304] Setting ErrFile to fd 2...
	I0729 05:08:09.250402   25123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:09.250537   25123 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:08:09.251635   25123 out.go:298] Setting JSON to false
	I0729 05:08:09.267797   25123 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11258,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:08:09.267888   25123 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:08:09.272044   25123 out.go:177] * [newest-cni-577000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:08:09.278893   25123 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:08:09.278951   25123 notify.go:220] Checking for updates...
	I0729 05:08:09.283420   25123 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:08:09.286811   25123 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:08:09.289868   25123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:08:09.292913   25123 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:08:09.295893   25123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:08:09.299187   25123 config.go:182] Loaded profile config "default-k8s-diff-port-267000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:08:09.299254   25123 config.go:182] Loaded profile config "multinode-623000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:08:09.299303   25123 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:08:09.303901   25123 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 05:08:09.310841   25123 start.go:297] selected driver: qemu2
	I0729 05:08:09.310847   25123 start.go:901] validating driver "qemu2" against <nil>
	I0729 05:08:09.310852   25123 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:08:09.313135   25123 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 05:08:09.313155   25123 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 05:08:09.319865   25123 out.go:177] * Automatically selected the socket_vmnet network
	I0729 05:08:09.323022   25123 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 05:08:09.323066   25123 cni.go:84] Creating CNI manager for ""
	I0729 05:08:09.323073   25123 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:08:09.323079   25123 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 05:08:09.323112   25123 start.go:340] cluster config:
	{Name:newest-cni-577000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-577000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:08:09.326856   25123 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:08:09.335871   25123 out.go:177] * Starting "newest-cni-577000" primary control-plane node in "newest-cni-577000" cluster
	I0729 05:08:09.339949   25123 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 05:08:09.339967   25123 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 05:08:09.339977   25123 cache.go:56] Caching tarball of preloaded images
	I0729 05:08:09.340049   25123 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:08:09.340056   25123 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 05:08:09.340127   25123 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/newest-cni-577000/config.json ...
	I0729 05:08:09.340139   25123 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/newest-cni-577000/config.json: {Name:mkadb4686fc2608c683a6b12f6184a95732c45bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 05:08:09.340367   25123 start.go:360] acquireMachinesLock for newest-cni-577000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:08:09.340403   25123 start.go:364] duration metric: took 30.209µs to acquireMachinesLock for "newest-cni-577000"
	I0729 05:08:09.340416   25123 start.go:93] Provisioning new machine with config: &{Name:newest-cni-577000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-577000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:08:09.340449   25123 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:08:09.348885   25123 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 05:08:09.367088   25123 start.go:159] libmachine.API.Create for "newest-cni-577000" (driver="qemu2")
	I0729 05:08:09.367123   25123 client.go:168] LocalClient.Create starting
	I0729 05:08:09.367185   25123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:08:09.367214   25123 main.go:141] libmachine: Decoding PEM data...
	I0729 05:08:09.367230   25123 main.go:141] libmachine: Parsing certificate...
	I0729 05:08:09.367267   25123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:08:09.367296   25123 main.go:141] libmachine: Decoding PEM data...
	I0729 05:08:09.367304   25123 main.go:141] libmachine: Parsing certificate...
	I0729 05:08:09.367730   25123 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:08:09.519352   25123 main.go:141] libmachine: Creating SSH key...
	I0729 05:08:09.604645   25123 main.go:141] libmachine: Creating Disk image...
	I0729 05:08:09.604655   25123 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:08:09.604838   25123 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2
	I0729 05:08:09.614082   25123 main.go:141] libmachine: STDOUT: 
	I0729 05:08:09.614098   25123 main.go:141] libmachine: STDERR: 
	I0729 05:08:09.614142   25123 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2 +20000M
	I0729 05:08:09.621972   25123 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:08:09.621987   25123 main.go:141] libmachine: STDERR: 
	I0729 05:08:09.622005   25123 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2
	I0729 05:08:09.622010   25123 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:08:09.622025   25123 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:08:09.622050   25123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:dd:fa:72:24:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2
	I0729 05:08:09.623695   25123 main.go:141] libmachine: STDOUT: 
	I0729 05:08:09.623709   25123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:08:09.623730   25123 client.go:171] duration metric: took 256.606667ms to LocalClient.Create
	I0729 05:08:11.625867   25123 start.go:128] duration metric: took 2.285438084s to createHost
	I0729 05:08:11.625915   25123 start.go:83] releasing machines lock for "newest-cni-577000", held for 2.28554375s
	W0729 05:08:11.625987   25123 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:08:11.638754   25123 out.go:177] * Deleting "newest-cni-577000" in qemu2 ...
	W0729 05:08:11.665024   25123 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:08:11.665078   25123 start.go:729] Will try again in 5 seconds ...
	I0729 05:08:16.667222   25123 start.go:360] acquireMachinesLock for newest-cni-577000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:08:16.667564   25123 start.go:364] duration metric: took 248.708µs to acquireMachinesLock for "newest-cni-577000"
	I0729 05:08:16.667705   25123 start.go:93] Provisioning new machine with config: &{Name:newest-cni-577000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-577000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:08:16.667935   25123 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 05:08:16.677472   25123 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 05:08:16.718485   25123 start.go:159] libmachine.API.Create for "newest-cni-577000" (driver="qemu2")
	I0729 05:08:16.718539   25123 client.go:168] LocalClient.Create starting
	I0729 05:08:16.718630   25123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/ca.pem
	I0729 05:08:16.718675   25123 main.go:141] libmachine: Decoding PEM data...
	I0729 05:08:16.718695   25123 main.go:141] libmachine: Parsing certificate...
	I0729 05:08:16.718763   25123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-21024/.minikube/certs/cert.pem
	I0729 05:08:16.718802   25123 main.go:141] libmachine: Decoding PEM data...
	I0729 05:08:16.718814   25123 main.go:141] libmachine: Parsing certificate...
	I0729 05:08:16.719414   25123 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 05:08:16.883532   25123 main.go:141] libmachine: Creating SSH key...
	I0729 05:08:17.015342   25123 main.go:141] libmachine: Creating Disk image...
	I0729 05:08:17.015351   25123 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 05:08:17.015600   25123 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2.raw /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2
	I0729 05:08:17.025205   25123 main.go:141] libmachine: STDOUT: 
	I0729 05:08:17.025222   25123 main.go:141] libmachine: STDERR: 
	I0729 05:08:17.025279   25123 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2 +20000M
	I0729 05:08:17.033174   25123 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 05:08:17.033189   25123 main.go:141] libmachine: STDERR: 
	I0729 05:08:17.033200   25123 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2
	I0729 05:08:17.033204   25123 main.go:141] libmachine: Starting QEMU VM...
	I0729 05:08:17.033214   25123 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:08:17.033249   25123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:31:3a:b4:ba:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2
	I0729 05:08:17.034866   25123 main.go:141] libmachine: STDOUT: 
	I0729 05:08:17.034881   25123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:08:17.034893   25123 client.go:171] duration metric: took 316.353292ms to LocalClient.Create
	I0729 05:08:19.037045   25123 start.go:128] duration metric: took 2.369114333s to createHost
	I0729 05:08:19.037171   25123 start.go:83] releasing machines lock for "newest-cni-577000", held for 2.36960325s
	W0729 05:08:19.037545   25123 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-577000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-577000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:08:19.050169   25123 out.go:177] 
	W0729 05:08:19.053284   25123 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:08:19.053317   25123 out.go:239] * 
	* 
	W0729 05:08:19.054844   25123 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:08:19.065141   25123 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-577000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-577000 -n newest-cni-577000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-577000 -n newest-cni-577000: exit status 7 (66.289791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-577000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
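Every start failure in this group bottoms out in the same driver error, Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing is serving the Unix socket that the driver's SocketVMnetPath points at. A triage sketch for the build host, assuming the default install paths shown in the log; the gateway value is an assumption taken from socket_vmnet's documentation, not from this report:

	ls -l /var/run/socket_vmnet    # the Unix socket should exist
	pgrep -fl socket_vmnet         # the daemon should be running
	# If the daemon is down, start it manually (requires root; gateway is an assumption):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet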

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-267000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-267000 create -f testdata/busybox.yaml: exit status 1 (29.860834ms)

** stderr ** 
	error: context "default-k8s-diff-port-267000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-267000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000: exit status 7 (28.603708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-267000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000: exit status 7 (28.892ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-267000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
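This kubectl failure is a knock-on effect: the earlier start never produced a running cluster, so no default-k8s-diff-port-267000 context was written to the kubeconfig. A sketch of confirming that directly, using the KUBECONFIG path the run itself reports (a manual diagnostic, not part of the test):

	KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig \
	  kubectl config get-contexts    # the profile's context will be missing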

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-267000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-267000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-267000 describe deploy/metrics-server -n kube-system: exit status 1 (26.855834ms)

** stderr ** 
	error: context "default-k8s-diff-port-267000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-267000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000: exit status 7 (29.496333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-267000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
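The expected image string in this assertion is composed from the two flags passed to addons enable: the --registries=MetricsServer=fake.domain prefix joined with the --images=MetricsServer=registry.k8s.io/echoserver:1.4 override, yielding fake.domain/registry.k8s.io/echoserver:1.4. A sketch of the same check run by hand, viable only once a cluster actually exists for this context:

	kubectl --context default-k8s-diff-port-267000 \
	  describe deploy/metrics-server -n kube-system | grep 'Image:'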

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-267000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-267000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.191223458s)

-- stdout --
	* [default-k8s-diff-port-267000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-267000" primary control-plane node in "default-k8s-diff-port-267000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-267000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-267000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:08:20.045779   25187 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:08:20.045960   25187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:20.045964   25187 out.go:304] Setting ErrFile to fd 2...
	I0729 05:08:20.045966   25187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:20.046106   25187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:08:20.047150   25187 out.go:298] Setting JSON to false
	I0729 05:08:20.063405   25187 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11269,"bootTime":1722243631,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:08:20.063511   25187 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:08:20.067828   25187 out.go:177] * [default-k8s-diff-port-267000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:08:20.074693   25187 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:08:20.074725   25187 notify.go:220] Checking for updates...
	I0729 05:08:20.091653   25187 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:08:20.095641   25187 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:08:20.098632   25187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:08:20.101591   25187 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:08:20.104676   25187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:08:20.108021   25187 config.go:182] Loaded profile config "default-k8s-diff-port-267000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:08:20.108284   25187 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:08:20.112637   25187 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 05:08:20.119686   25187 start.go:297] selected driver: qemu2
	I0729 05:08:20.119693   25187 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-267000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-267000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:08:20.119746   25187 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:08:20.122148   25187 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 05:08:20.122178   25187 cni.go:84] Creating CNI manager for ""
	I0729 05:08:20.122188   25187 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:08:20.122222   25187 start.go:340] cluster config:
	{Name:default-k8s-diff-port-267000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-267000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:08:20.125769   25187 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:08:20.133625   25187 out.go:177] * Starting "default-k8s-diff-port-267000" primary control-plane node in "default-k8s-diff-port-267000" cluster
	I0729 05:08:20.137644   25187 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:08:20.137661   25187 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 05:08:20.137673   25187 cache.go:56] Caching tarball of preloaded images
	I0729 05:08:20.137733   25187 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:08:20.137741   25187 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:08:20.137818   25187 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/default-k8s-diff-port-267000/config.json ...
	I0729 05:08:20.138286   25187 start.go:360] acquireMachinesLock for default-k8s-diff-port-267000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:08:20.138316   25187 start.go:364] duration metric: took 23.333µs to acquireMachinesLock for "default-k8s-diff-port-267000"
	I0729 05:08:20.138325   25187 start.go:96] Skipping create...Using existing machine configuration
	I0729 05:08:20.138331   25187 fix.go:54] fixHost starting: 
	I0729 05:08:20.138449   25187 fix.go:112] recreateIfNeeded on default-k8s-diff-port-267000: state=Stopped err=<nil>
	W0729 05:08:20.138457   25187 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 05:08:20.142728   25187 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-267000" ...
	I0729 05:08:20.150625   25187 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:08:20.150661   25187 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:13:5c:d0:d1:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2
	I0729 05:08:20.152722   25187 main.go:141] libmachine: STDOUT: 
	I0729 05:08:20.152742   25187 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:08:20.152771   25187 fix.go:56] duration metric: took 14.44125ms for fixHost
	I0729 05:08:20.152775   25187 start.go:83] releasing machines lock for "default-k8s-diff-port-267000", held for 14.455584ms
	W0729 05:08:20.152782   25187 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:08:20.152820   25187 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:08:20.152824   25187 start.go:729] Will try again in 5 seconds ...
	I0729 05:08:25.154985   25187 start.go:360] acquireMachinesLock for default-k8s-diff-port-267000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:08:25.155414   25187 start.go:364] duration metric: took 305.583µs to acquireMachinesLock for "default-k8s-diff-port-267000"
	I0729 05:08:25.155529   25187 start.go:96] Skipping create...Using existing machine configuration
	I0729 05:08:25.155549   25187 fix.go:54] fixHost starting: 
	I0729 05:08:25.156342   25187 fix.go:112] recreateIfNeeded on default-k8s-diff-port-267000: state=Stopped err=<nil>
	W0729 05:08:25.156371   25187 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 05:08:25.161672   25187 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-267000" ...
	I0729 05:08:25.165826   25187 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:08:25.166062   25187 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:13:5c:d0:d1:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/default-k8s-diff-port-267000/disk.qcow2
	I0729 05:08:25.175241   25187 main.go:141] libmachine: STDOUT: 
	I0729 05:08:25.175297   25187 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:08:25.175374   25187 fix.go:56] duration metric: took 19.826458ms for fixHost
	I0729 05:08:25.175388   25187 start.go:83] releasing machines lock for "default-k8s-diff-port-267000", held for 19.951791ms
	W0729 05:08:25.175533   25187 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-267000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-267000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:08:25.183734   25187 out.go:177] 
	W0729 05:08:25.186894   25187 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:08:25.186917   25187 out.go:239] * 
	* 
	W0729 05:08:25.189515   25187 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:08:25.196767   25187 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-267000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000: exit status 7 (66.448917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-267000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
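On SecondStart the driver takes the fixHost path: it skips creation, tries to restart the existing VM through the same socket_vmnet wrapper, retries once after 5 seconds, and exits with GUEST_PROVISION. The invocation shape in the log appears to be socket_vmnet_client connecting the Unix socket and handing it to QEMU as fd 3 (hence -netdev socket,id=net0,fd=3). A trimmed sketch of that shape, with most QEMU flags elided; see the full command above:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	  -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
	  ...   # remaining flags exactly as logged above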

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-577000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-577000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.183665917s)

-- stdout --
	* [newest-cni-577000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-577000" primary control-plane node in "newest-cni-577000" cluster
	* Restarting existing qemu2 VM for "newest-cni-577000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-577000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 05:08:23.163060   25215 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:08:23.163191   25215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:23.163194   25215 out.go:304] Setting ErrFile to fd 2...
	I0729 05:08:23.163196   25215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:23.163318   25215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:08:23.164333   25215 out.go:298] Setting JSON to false
	I0729 05:08:23.180758   25215 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11272,"bootTime":1722243631,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 05:08:23.180820   25215 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:08:23.185308   25215 out.go:177] * [newest-cni-577000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 05:08:23.193267   25215 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 05:08:23.193315   25215 notify.go:220] Checking for updates...
	I0729 05:08:23.201245   25215 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 05:08:23.204268   25215 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 05:08:23.205688   25215 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:08:23.209272   25215 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 05:08:23.212285   25215 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:08:23.215537   25215 config.go:182] Loaded profile config "newest-cni-577000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 05:08:23.215793   25215 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:08:23.219189   25215 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 05:08:23.226240   25215 start.go:297] selected driver: qemu2
	I0729 05:08:23.226247   25215 start.go:901] validating driver "qemu2" against &{Name:newest-cni-577000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-577000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:08:23.226297   25215 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:08:23.228609   25215 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 05:08:23.228644   25215 cni.go:84] Creating CNI manager for ""
	I0729 05:08:23.228652   25215 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:08:23.228675   25215 start.go:340] cluster config:
	{Name:newest-cni-577000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-577000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:08:23.232481   25215 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:08:23.241242   25215 out.go:177] * Starting "newest-cni-577000" primary control-plane node in "newest-cni-577000" cluster
	I0729 05:08:23.245218   25215 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 05:08:23.245231   25215 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 05:08:23.245239   25215 cache.go:56] Caching tarball of preloaded images
	I0729 05:08:23.245298   25215 preload.go:172] Found /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 05:08:23.245303   25215 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 05:08:23.245361   25215 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/newest-cni-577000/config.json ...
	I0729 05:08:23.245837   25215 start.go:360] acquireMachinesLock for newest-cni-577000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:08:23.245866   25215 start.go:364] duration metric: took 22.625µs to acquireMachinesLock for "newest-cni-577000"
	I0729 05:08:23.245876   25215 start.go:96] Skipping create...Using existing machine configuration
	I0729 05:08:23.245882   25215 fix.go:54] fixHost starting: 
	I0729 05:08:23.246005   25215 fix.go:112] recreateIfNeeded on newest-cni-577000: state=Stopped err=<nil>
	W0729 05:08:23.246015   25215 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 05:08:23.249298   25215 out.go:177] * Restarting existing qemu2 VM for "newest-cni-577000" ...
	I0729 05:08:23.257180   25215 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:08:23.257218   25215 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:31:3a:b4:ba:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2
	I0729 05:08:23.259323   25215 main.go:141] libmachine: STDOUT: 
	I0729 05:08:23.259347   25215 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:08:23.259379   25215 fix.go:56] duration metric: took 13.496ms for fixHost
	I0729 05:08:23.259384   25215 start.go:83] releasing machines lock for "newest-cni-577000", held for 13.514458ms
	W0729 05:08:23.259390   25215 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:08:23.259427   25215 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:08:23.259432   25215 start.go:729] Will try again in 5 seconds ...
	I0729 05:08:28.261566   25215 start.go:360] acquireMachinesLock for newest-cni-577000: {Name:mkdbd944ce6a9fcba5673eff3baf88cc3e4d4b5e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 05:08:28.262221   25215 start.go:364] duration metric: took 523.875µs to acquireMachinesLock for "newest-cni-577000"
	I0729 05:08:28.262367   25215 start.go:96] Skipping create...Using existing machine configuration
	I0729 05:08:28.262390   25215 fix.go:54] fixHost starting: 
	I0729 05:08:28.263150   25215 fix.go:112] recreateIfNeeded on newest-cni-577000: state=Stopped err=<nil>
	W0729 05:08:28.263178   25215 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 05:08:28.271493   25215 out.go:177] * Restarting existing qemu2 VM for "newest-cni-577000" ...
	I0729 05:08:28.275542   25215 qemu.go:418] Using hvf for hardware acceleration
	I0729 05:08:28.275846   25215 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:31:3a:b4:ba:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19338-21024/.minikube/machines/newest-cni-577000/disk.qcow2
	I0729 05:08:28.285765   25215 main.go:141] libmachine: STDOUT: 
	I0729 05:08:28.285837   25215 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 05:08:28.285918   25215 fix.go:56] duration metric: took 23.531083ms for fixHost
	I0729 05:08:28.285938   25215 start.go:83] releasing machines lock for "newest-cni-577000", held for 23.694292ms
	W0729 05:08:28.286139   25215 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-577000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-577000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 05:08:28.293567   25215 out.go:177] 
	W0729 05:08:28.296582   25215 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 05:08:28.296604   25215 out.go:239] * 
	* 
	W0729 05:08:28.299114   25215 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 05:08:28.306604   25215 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-577000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-577000 -n newest-cni-577000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-577000 -n newest-cni-577000: exit status 7 (72.819666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-577000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
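
Note on the failure above: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, so the restart dies as soon as the client cannot reach the socket_vmnet daemon's unix socket. A minimal Go sketch of that reachability check (probeSocketVMnet is a hypothetical helper, not code from the test suite):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// probeSocketVMnet dials the socket_vmnet unix socket the same way a client
// would; "connection refused" means nothing is listening on it.
func probeSocketVMnet(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing log above
	if err := probeSocketVMnet(sock); err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial returns "connection refused", no daemon is serving /var/run/socket_vmnet, which matches both restart attempts logged above.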

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-267000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000: exit status 7 (32.209333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-267000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-267000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-267000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-267000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.717583ms)

** stderr ** 
	error: context "default-k8s-diff-port-267000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-267000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000: exit status 7 (29.228209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-267000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-267000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000: exit status 7 (29.193542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-267000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
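
The "(-want +got)" block above is a go-cmp style diff: every expected v1.30.3 image carries a "-" prefix because the stopped profile returned an empty image list. A small sketch of how such a comparison can be produced with github.com/google/go-cmp (illustrative only; the test's actual helper may differ):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Two of the expected v1.30.3 images, taken from the diff above.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.30.3",
	}
	// A stopped profile reports no images at all.
	var got []string

	// cmp.Diff prints "-" for entries present only in want and "+" for
	// entries present only in got, i.e. the report's "(-want +got)" form.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.3 images missing (-want +got):\n%s", diff)
	}
}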

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-267000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-267000 --alsologtostderr -v=1: exit status 83 (39.250166ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-267000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-267000"

-- /stdout --
** stderr ** 
	I0729 05:08:25.463109   25237 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:08:25.463280   25237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:25.463283   25237 out.go:304] Setting ErrFile to fd 2...
	I0729 05:08:25.463285   25237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:25.463455   25237 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:08:25.463666   25237 out.go:298] Setting JSON to false
	I0729 05:08:25.463672   25237 mustload.go:65] Loading cluster: default-k8s-diff-port-267000
	I0729 05:08:25.463861   25237 config.go:182] Loaded profile config "default-k8s-diff-port-267000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:08:25.467846   25237 out.go:177] * The control-plane node default-k8s-diff-port-267000 host is not running: state=Stopped
	I0729 05:08:25.470810   25237 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-267000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-267000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000: exit status 7 (28.2715ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-267000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000: exit status 7 (28.658791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-267000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
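
The post-mortem helpers above interpret the status command's exit code rather than treating any non-zero exit as fatal: exit status 7 together with "Stopped" on stdout is flagged as "may be ok". A sketch of that pattern using os/exec (binary path and profile name taken from the log; this is not the helpers' actual code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the post-mortem helpers run above.
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "default-k8s-diff-port-267000")
	out, err := cmd.Output() // stdout is still returned on a non-zero exit

	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	// minikube status encodes host state in its exit code, so exit status 7
	// alongside "Stopped" on stdout is reported as "may be ok" rather than
	// treated as a hard failure.
	fmt.Printf("host=%q exit=%d\n", string(out), code)
}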

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-577000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-577000 -n newest-cni-577000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-577000 -n newest-cni-577000: exit status 7 (29.693125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-577000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-577000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-577000 --alsologtostderr -v=1: exit status 83 (40.877167ms)

-- stdout --
	* The control-plane node newest-cni-577000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-577000"

-- /stdout --
** stderr ** 
	I0729 05:08:28.495920   25261 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:08:28.496059   25261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:28.496062   25261 out.go:304] Setting ErrFile to fd 2...
	I0729 05:08:28.496064   25261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:08:28.496212   25261 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 05:08:28.496429   25261 out.go:298] Setting JSON to false
	I0729 05:08:28.496435   25261 mustload.go:65] Loading cluster: newest-cni-577000
	I0729 05:08:28.496625   25261 config.go:182] Loaded profile config "newest-cni-577000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 05:08:28.501224   25261 out.go:177] * The control-plane node newest-cni-577000 host is not running: state=Stopped
	I0729 05:08:28.504182   25261 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-577000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-577000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-577000 -n newest-cni-577000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-577000 -n newest-cni-577000: exit status 7 (29.643917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-577000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-577000 -n newest-cni-577000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-577000 -n newest-cni-577000: exit status 7 (29.316833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-577000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 6.46
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.11
21 TestDownloadOnly/v1.31.0-beta.0/json-events 6.27
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.11
30 TestBinaryMirror 0.28
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 10.42
48 TestErrorSpam/start 0.38
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 8.55
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.73
64 TestFunctional/serial/CacheCmd/cache/add_local 1.08
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.21
80 TestFunctional/parallel/DryRun 0.24
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.09
102 TestFunctional/parallel/License 0.29
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 1.7
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
135 TestFunctional/parallel/ProfileCmd/profile_list 0.08
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 1.9
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.2
202 TestMainNoArgs 0.03
247 TestStoppedBinaryUpgrade/Setup 1.19
249 TestStoppedBinaryUpgrade/MinikubeLogs 0.76
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
267 TestNoKubernetes/serial/ProfileList 0.1
268 TestNoKubernetes/serial/Stop 1.93
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
284 TestStartStop/group/old-k8s-version/serial/Stop 3.65
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
295 TestStartStop/group/no-preload/serial/Stop 2.91
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
306 TestStartStop/group/embed-certs/serial/Stop 3.14
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.53
320 TestStartStop/group/newest-cni/serial/DeployApp 0
321 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
322 TestStartStop/group/newest-cni/serial/Stop 3.81
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-008000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-008000: exit status 85 (96.353541ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:42 PDT |          |
	|         | -p download-only-008000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 04:42:49
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 04:42:49.260854   21510 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:42:49.261004   21510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:49.261008   21510 out.go:304] Setting ErrFile to fd 2...
	I0729 04:42:49.261010   21510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:49.261147   21510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	W0729 04:42:49.261231   21510 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19338-21024/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19338-21024/.minikube/config/config.json: no such file or directory
	I0729 04:42:49.262550   21510 out.go:298] Setting JSON to true
	I0729 04:42:49.279423   21510 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9738,"bootTime":1722243631,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:42:49.279549   21510 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:42:49.284816   21510 out.go:97] [download-only-008000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:42:49.284954   21510 notify.go:220] Checking for updates...
	W0729 04:42:49.285047   21510 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 04:42:49.290390   21510 out.go:169] MINIKUBE_LOCATION=19338
	I0729 04:42:49.293860   21510 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:42:49.298155   21510 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:42:49.304061   21510 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:42:49.307778   21510 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	W0729 04:42:49.314330   21510 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 04:42:49.314521   21510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:42:49.318267   21510 out.go:97] Using the qemu2 driver based on user configuration
	I0729 04:42:49.318285   21510 start.go:297] selected driver: qemu2
	I0729 04:42:49.318308   21510 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:42:49.318366   21510 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:42:49.321994   21510 out.go:169] Automatically selected the socket_vmnet network
	I0729 04:42:49.327319   21510 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 04:42:49.327412   21510 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:42:49.327439   21510 cni.go:84] Creating CNI manager for ""
	I0729 04:42:49.327456   21510 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 04:42:49.327503   21510 start.go:340] cluster config:
	{Name:download-only-008000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-008000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:42:49.331552   21510 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:42:49.335837   21510 out.go:97] Downloading VM boot image ...
	I0729 04:42:49.335857   21510 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 04:42:54.126274   21510 out.go:97] Starting "download-only-008000" primary control-plane node in "download-only-008000" cluster
	I0729 04:42:54.126300   21510 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:42:54.181104   21510 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:42:54.181113   21510 cache.go:56] Caching tarball of preloaded images
	I0729 04:42:54.181258   21510 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:42:54.185913   21510 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 04:42:54.185919   21510 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:42:54.267924   21510 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:42:59.536462   21510 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:42:59.536631   21510 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:43:00.231838   21510 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 04:43:00.232047   21510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/download-only-008000/config.json ...
	I0729 04:43:00.232066   21510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/download-only-008000/config.json: {Name:mk8824f391d26486e3a1ec3bdb264ebdb1b0c69b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:43:00.233133   21510 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:43:00.233465   21510 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 04:43:00.750273   21510 out.go:169] 
	W0729 04:43:00.755253   21510 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19338-21024/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104ffda60 0x104ffda60 0x104ffda60 0x104ffda60 0x104ffda60 0x104ffda60 0x104ffda60] Decompressors:map[bz2:0x140007d1470 gz:0x140007d1478 tar:0x140007d1420 tar.bz2:0x140007d1430 tar.gz:0x140007d1440 tar.xz:0x140007d1450 tar.zst:0x140007d1460 tbz2:0x140007d1430 tgz:0x140007d1440 txz:0x140007d1450 tzst:0x140007d1460 xz:0x140007d1480 zip:0x140007d1490 zst:0x140007d1488] Getters:map[file:0x1400069a0c0 http:0x14000b24320 https:0x14000b24370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 04:43:00.755280   21510 out_reason.go:110] 
	W0729 04:43:00.762307   21510 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:43:00.766185   21510 out.go:169] 
	
	
	* The control-plane node download-only-008000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-008000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
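
The kubectl caching failure inside this log comes from go-getter's "?checksum=file:<url>.sha256" convention: the checksum file itself is fetched first, and here that fetch 404s before any download can be verified. A rough sketch of the same fetch-then-verify flow using only the standard library (hypothetical local path; this is not minikube's downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// verifyAgainstRemoteChecksum fetches a published .sha256 file and compares
// it with the digest of an already-downloaded local file. The 404 in the log
// above happens at the first step, before any hashing takes place.
func verifyAgainstRemoteChecksum(localPath, checksumURL string) error {
	resp, err := http.Get(checksumURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode) // the failure mode logged above
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(body))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file")
	}
	want := fields[0] // .sha256 files lead with the hex digest

	f, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
	}
	return nil
}

func main() {
	// Checksum URL taken from the log; the local path is hypothetical.
	err := verifyAgainstRemoteChecksum("kubectl.download",
		"https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256")
	fmt.Println(err)
}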

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-008000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (6.46s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-106000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-106000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (6.4584445s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (6.46s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-106000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-106000: exit status 85 (79.795542ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:42 PDT |                     |
	|         | -p download-only-008000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| delete  | -p download-only-008000        | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| start   | -o=json --download-only        | download-only-106000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | -p download-only-106000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 04:43:01
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 04:43:01.188127   21534 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:43:01.188274   21534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:43:01.188278   21534 out.go:304] Setting ErrFile to fd 2...
	I0729 04:43:01.188280   21534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:43:01.188419   21534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:43:01.189492   21534 out.go:298] Setting JSON to true
	I0729 04:43:01.207610   21534 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9750,"bootTime":1722243631,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:43:01.207681   21534 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:43:01.212250   21534 out.go:97] [download-only-106000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:43:01.212367   21534 notify.go:220] Checking for updates...
	I0729 04:43:01.216142   21534 out.go:169] MINIKUBE_LOCATION=19338
	I0729 04:43:01.219223   21534 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:43:01.222173   21534 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:43:01.225202   21534 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:43:01.228248   21534 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	W0729 04:43:01.233151   21534 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 04:43:01.233351   21534 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:43:01.236155   21534 out.go:97] Using the qemu2 driver based on user configuration
	I0729 04:43:01.236164   21534 start.go:297] selected driver: qemu2
	I0729 04:43:01.236168   21534 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:43:01.236227   21534 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:43:01.239253   21534 out.go:169] Automatically selected the socket_vmnet network
	I0729 04:43:01.244345   21534 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 04:43:01.244436   21534 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:43:01.244453   21534 cni.go:84] Creating CNI manager for ""
	I0729 04:43:01.244461   21534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:43:01.244466   21534 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:43:01.244502   21534 start.go:340] cluster config:
	{Name:download-only-106000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-106000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:43:01.248165   21534 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:43:01.251165   21534 out.go:97] Starting "download-only-106000" primary control-plane node in "download-only-106000" cluster
	I0729 04:43:01.251177   21534 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:43:01.314282   21534 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:43:01.314292   21534 cache.go:56] Caching tarball of preloaded images
	I0729 04:43:01.314448   21534 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:43:01.318431   21534 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 04:43:01.318438   21534 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:43:01.401842   21534 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:43:05.524812   21534 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:43:05.524996   21534 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:43:06.068504   21534 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:43:06.068704   21534 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/download-only-106000/config.json ...
	I0729 04:43:06.068720   21534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/download-only-106000/config.json: {Name:mkd6dc331f6d9caacb1a10e9e79ff9037174aab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:43:06.070269   21534 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:43:06.070581   21534 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-106000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-106000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
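
Aside: the preload flow logged above downloads the tarball with a ?checksum=md5:... query and then verifies the file on disk before trusting the cache. As a rough illustration of that verify step only (a sketch, not minikube's actual code; the local filename is a stand-in, and the digest is the md5 from the preload URL above):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyMD5 hashes the file at path and compares it to wantHex,
    // mirroring the "verifying checksum of ..." step in the log above.
    func verifyMD5(path, wantHex string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
        }
        return nil
    }

    func main() {
        // Hypothetical local filename; the expected digest comes from the
        // preload URL's ?checksum=md5:... query in the log above.
        fmt.Println(verifyMD5("preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4",
            "5a76dba1959f6b6fc5e29e1e172ab9ca"))
    }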

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-106000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.0-beta.0/json-events (6.27s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-942000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-942000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (6.267122875s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (6.27s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-942000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-942000: exit status 85 (80.496334ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:42 PDT |                     |
	|         | -p download-only-008000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| delete  | -p download-only-008000             | download-only-008000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| start   | -o=json --download-only             | download-only-106000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | -p download-only-106000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| delete  | -p download-only-106000             | download-only-106000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT | 29 Jul 24 04:43 PDT |
	| start   | -o=json --download-only             | download-only-942000 | jenkins | v1.33.1 | 29 Jul 24 04:43 PDT |                     |
	|         | -p download-only-942000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 04:43:07
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 04:43:07.941443   21556 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:43:07.941574   21556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:43:07.941578   21556 out.go:304] Setting ErrFile to fd 2...
	I0729 04:43:07.941580   21556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:43:07.941720   21556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:43:07.942853   21556 out.go:298] Setting JSON to true
	I0729 04:43:07.959103   21556 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9756,"bootTime":1722243631,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:43:07.959166   21556 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:43:07.964075   21556 out.go:97] [download-only-942000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:43:07.964204   21556 notify.go:220] Checking for updates...
	I0729 04:43:07.967975   21556 out.go:169] MINIKUBE_LOCATION=19338
	I0729 04:43:07.971190   21556 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:43:07.972790   21556 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:43:07.976044   21556 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:43:07.979029   21556 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	W0729 04:43:07.985031   21556 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 04:43:07.985168   21556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:43:07.987994   21556 out.go:97] Using the qemu2 driver based on user configuration
	I0729 04:43:07.988003   21556 start.go:297] selected driver: qemu2
	I0729 04:43:07.988007   21556 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:43:07.988046   21556 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:43:07.991060   21556 out.go:169] Automatically selected the socket_vmnet network
	I0729 04:43:07.996344   21556 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 04:43:07.996443   21556 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:43:07.996459   21556 cni.go:84] Creating CNI manager for ""
	I0729 04:43:07.996468   21556 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:43:07.996473   21556 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:43:07.996511   21556 start.go:340] cluster config:
	{Name:download-only-942000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-942000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:43:07.999969   21556 iso.go:125] acquiring lock: {Name:mk9056ecf3fb7996c84d1b897fe5b9e4b392b364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:43:08.003084   21556 out.go:97] Starting "download-only-942000" primary control-plane node in "download-only-942000" cluster
	I0729 04:43:08.003093   21556 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:43:08.056984   21556 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 04:43:08.056995   21556 cache.go:56] Caching tarball of preloaded images
	I0729 04:43:08.057821   21556 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:43:08.061309   21556 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 04:43:08.061317   21556 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:43:08.136233   21556 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 04:43:12.041953   21556 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:43:12.042127   21556 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:43:12.561071   21556 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 04:43:12.561260   21556 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/download-only-942000/config.json ...
	I0729 04:43:12.561276   21556 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-21024/.minikube/profiles/download-only-942000/config.json: {Name:mk1f9f7e58abdf49a11da46a51233de10ca2b197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:43:12.561525   21556 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:43:12.562363   21556 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19338-21024/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-942000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-942000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-942000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.28s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-188000 --alsologtostderr --binary-mirror http://127.0.0.1:53903 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-188000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-188000
--- PASS: TestBinaryMirror (0.28s)
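
Aside: TestBinaryMirror points minikube's Kubernetes binary downloads at a local HTTP endpoint via --binary-mirror (http://127.0.0.1:53903 in the run above). A minimal stand-in for such a mirror is just a static file server; assume a ./mirror directory laid out like dl.k8s.io (e.g. ./mirror/release/v1.30.3/bin/darwin/arm64/kubectl):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve ./mirror with the same path layout as dl.k8s.io so that
        // --binary-mirror URLs resolve to local files.
        http.Handle("/", http.FileServer(http.Dir("./mirror")))
        log.Fatal(http.ListenAndServe("127.0.0.1:53903", nil))
    }

With that running, a command like the one logged above (minikube start --download-only -p binary-mirror-188000 --binary-mirror http://127.0.0.1:53903 --driver=qemu2) fetches the binaries from the local server instead of dl.k8s.io.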

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-338000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-338000: exit status 85 (61.440208ms)

-- stdout --
	* Profile "addons-338000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-338000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-338000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-338000: exit status 85 (57.608833ms)

-- stdout --
	* Profile "addons-338000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-338000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
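
Aside: both PreSetup subtests pass because they expect failure: with no addons-338000 profile, addons enable/disable exits with status 85 instead of succeeding. A sketch of how a harness can recover that exit code (the command is the one from the log; the error handling is the standard os/exec pattern, not the suite's actual helper):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64",
            "addons", "enable", "dashboard", "-p", "addons-338000")
        out, err := cmd.CombinedOutput()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // The tests above treat exit status 85 as the expected
            // outcome for a missing profile, not as a failure.
            fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
            return
        }
        fmt.Printf("unexpected success or error: %v\n%s", err, out)
    }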

TestHyperKitDriverInstallOrUpdate (10.42s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.42s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 status: exit status 7 (30.93775ms)

-- stdout --
	nospam-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 status: exit status 7 (29.1665ms)

-- stdout --
	nospam-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 status: exit status 7 (29.800667ms)

-- stdout --
	nospam-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 pause: exit status 83 (41.059666ms)

-- stdout --
	* The control-plane node nospam-862000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-862000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 pause: exit status 83 (37.954084ms)

-- stdout --
	* The control-plane node nospam-862000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-862000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 pause: exit status 83 (37.711834ms)

-- stdout --
	* The control-plane node nospam-862000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-862000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 unpause: exit status 83 (39.713667ms)

-- stdout --
	* The control-plane node nospam-862000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-862000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 unpause: exit status 83 (38.810541ms)

-- stdout --
	* The control-plane node nospam-862000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-862000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 unpause: exit status 83 (38.839958ms)

-- stdout --
	* The control-plane node nospam-862000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-862000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (8.55s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 stop: (3.138176791s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 stop: (3.480336667s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-862000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-862000 stop: (1.929240875s)
--- PASS: TestErrorSpam/stop (8.55s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19338-21024/.minikube/files/etc/test/nested/copy/21508/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.73s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-051000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3194213350/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 cache add minikube-local-cache-test:functional-051000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 cache delete minikube-local-cache-test:functional-051000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-051000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)
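
Aside: add_local exercises the full round trip: build a throwaway image, add it to minikube's cache, then remove it from both the cache and the local Docker daemon. A sketch of that same sequence (the image name matches the log; the build context directory is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes one step of the round trip and reports any failure.
    func run(name string, args ...string) error {
        if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
            return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
        }
        return nil
    }

    func main() {
        const img = "minikube-local-cache-test:functional-051000"
        steps := [][]string{
            {"docker", "build", "-t", img, "./cache-test"}, // hypothetical context dir
            {"out/minikube-darwin-arm64", "-p", "functional-051000", "cache", "add", img},
            {"out/minikube-darwin-arm64", "-p", "functional-051000", "cache", "delete", img},
            {"docker", "rmi", img},
        }
        for _, s := range steps {
            if err := run(s[0], s[1:]...); err != nil {
                fmt.Println(err)
                return
            }
        }
    }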

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.21s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 config get cpus: exit status 14 (29.9575ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 config get cpus: exit status 14 (29.3695ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)
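
Aside: ConfigCmd depends on "config get" exiting with status 14 when a key is unset, so the unset/get pairs above pass precisely because the command fails. A test-style sketch of that contract (binary and profile names are from the log; the exit-code plumbing is an assumption, not the suite's actual helper):

    package demo

    import (
        "errors"
        "os/exec"
        "testing"
    )

    // TestConfigCpus mirrors the unset/get/set/get sequence above:
    // "config get" must exit 14 for a missing key and 0 for a set key.
    func TestConfigCpus(t *testing.T) {
        mk := func(args ...string) int {
            err := exec.Command("out/minikube-darwin-arm64",
                append([]string{"-p", "functional-051000", "config"}, args...)...).Run()
            var ee *exec.ExitError
            if errors.As(err, &ee) {
                return ee.ExitCode()
            }
            if err != nil {
                t.Fatal(err)
            }
            return 0
        }
        mk("unset", "cpus")
        if code := mk("get", "cpus"); code != 14 {
            t.Fatalf("get after unset: exit %d, want 14", code)
        }
        mk("set", "cpus", "2")
        if code := mk("get", "cpus"); code != 0 {
            t.Fatalf("get after set: exit %d, want 0", code)
        }
    }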

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-051000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-051000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.764708ms)

-- stdout --
	* [functional-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0729 04:44:54.286767   22082 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:44:54.286898   22082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:44:54.286905   22082 out.go:304] Setting ErrFile to fd 2...
	I0729 04:44:54.286907   22082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:44:54.287025   22082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:44:54.288007   22082 out.go:298] Setting JSON to false
	I0729 04:44:54.304373   22082 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9863,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:44:54.304441   22082 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:44:54.307971   22082 out.go:177] * [functional-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:44:54.315299   22082 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:44:54.315358   22082 notify.go:220] Checking for updates...
	I0729 04:44:54.322201   22082 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:44:54.326098   22082 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:44:54.329188   22082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:44:54.332188   22082 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:44:54.335205   22082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:44:54.338583   22082 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:44:54.338837   22082 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:44:54.343142   22082 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:44:54.350145   22082 start.go:297] selected driver: qemu2
	I0729 04:44:54.350153   22082 start.go:901] validating driver "qemu2" against &{Name:functional-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:44:54.350218   22082 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:44:54.356237   22082 out.go:177] 
	W0729 04:44:54.360100   22082 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 04:44:54.364177   22082 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-051000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
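
Aside: DryRun deliberately requests 250MB so that start fails validation: anything below the usable minimum of 1800MB aborts with RSRC_INSUFFICIENT_REQ_MEMORY before any VM work starts. A toy version of that floor check (the constant and wording mirror the log; this is not minikube's actual validation code):

    package main

    import "fmt"

    // minUsableMB is the floor enforced in the log above.
    const minUsableMB = 1800

    func checkMemory(requestedMB int) error {
        if requestedMB < minUsableMB {
            return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMB, minUsableMB)
        }
        return nil
    }

    func main() {
        fmt.Println(checkMemory(250))  // fails, as in the DryRun test
        fmt.Println(checkMemory(4000)) // passes, matching the suggested alloc
    }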

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-051000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-051000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.618292ms)

-- stdout --
	* [functional-051000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0729 04:44:54.172088   22078 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:44:54.172188   22078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:44:54.172190   22078 out.go:304] Setting ErrFile to fd 2...
	I0729 04:44:54.172193   22078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:44:54.172320   22078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-21024/.minikube/bin
	I0729 04:44:54.173723   22078 out.go:298] Setting JSON to false
	I0729 04:44:54.190676   22078 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9863,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:44:54.190764   22078 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:44:54.195376   22078 out.go:177] * [functional-051000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0729 04:44:54.202226   22078 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 04:44:54.202280   22078 notify.go:220] Checking for updates...
	I0729 04:44:54.209023   22078 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	I0729 04:44:54.212214   22078 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:44:54.215180   22078 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:44:54.218206   22078 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	I0729 04:44:54.221171   22078 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:44:54.224481   22078 config.go:182] Loaded profile config "functional-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:44:54.224769   22078 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:44:54.229242   22078 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0729 04:44:54.236178   22078 start.go:297] selected driver: qemu2
	I0729 04:44:54.236184   22078 start.go:901] validating driver "qemu2" against &{Name:functional-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:44:54.236257   22078 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:44:54.242221   22078 out.go:177] 
	W0729 04:44:54.246194   22078 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 04:44:54.249208   22078 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
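Note: the stderr above is the expected localized output. "Utilisation du pilote qemu2 basé sur le profil existant" is French for "Using the qemu2 driver based on the existing profile", and the RSRC_INSUFFICIENT_REQ_MEMORY message reads "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB". A minimal Go sketch of how such a localized run can be driven, assuming minikube selects its translation from LC_ALL (the binary path and profile name mirror the log; the exact flag set the suite uses is not shown here):

// localized_start_sketch.go — a sketch, not the suite's actual helper.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Deliberately request too little memory so the localized error is printed.
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-051000",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=qemu2")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // ask for French output (assumption)
	out, _ := cmd.CombinedOutput()              // a non-zero exit is expected here
	if strings.Contains(string(out), "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("localized error message found")
	}
}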

TestFunctional/parallel/AddonsCmd (0.09s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.29s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.7s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.667819833s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-051000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.70s)
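The Setup step above pulls kicbase/echo-server:1.0 and retags it with the profile name so the later image load/save/remove subtests have a known local image to work with. A minimal Go sketch of that pull-and-retag pattern (image names and tag taken from the log):

// image_setup_sketch.go — mirrors the two docker commands shown above.
package main

import "os/exec"

func main() {
	if err := exec.Command("docker", "pull", "docker.io/kicbase/echo-server:1.0").Run(); err != nil {
		panic(err)
	}
	// Retag under the profile name used by the functional tests.
	if err := exec.Command("docker", "tag", "docker.io/kicbase/echo-server:1.0",
		"docker.io/kicbase/echo-server:functional-051000").Run(); err != nil {
		panic(err)
	}
}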

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-051000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
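StartTunnel only launches `minikube tunnel` as a long-running background process (the "(dbg) daemon:" prefix above); DeleteTunnel later stops it. A minimal sketch of that start/stop shape with os/exec, assuming nothing about the suite's own daemon helpers:

// tunnel_daemon_sketch.go — a sketch of the daemon start/stop pattern only.
package main

import (
	"os/exec"
	"time"
)

func main() {
	tunnel := exec.Command("out/minikube-darwin-arm64", "-p", "functional-051000",
		"tunnel", "--alsologtostderr")
	if err := tunnel.Start(); err != nil { // Start, not Run: do not block on the daemon
		panic(err)
	}
	time.Sleep(5 * time.Second) // stand-in for the subtests that use the tunnel
	_ = tunnel.Process.Kill()   // the DeleteTunnel equivalent: stop the daemon
	_ = tunnel.Wait()           // reap the child; a "signal: killed" error is expected
}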

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image rm docker.io/kicbase/echo-server:functional-051000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-051000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 image save --daemon docker.io/kicbase/echo-server:functional-051000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-051000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "46.713125ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.92325ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "44.902958ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.692875ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.011918917s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
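The check above confirms that, with the tunnel up, macOS's directory-service resolver can see the in-cluster service name. A minimal Go sketch of the same dscacheutil probe (the service name comes from the log; the "ip_address:" substring check is an assumption about dscacheutil's output format):

// dns_check_sketch.go — reruns the probe shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("dscacheutil", "-q", "host", "-a", "name",
		"nginx-svc.default.svc.cluster.local.").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "ip_address:") {
		fmt.Println("DNS resolution by dscacheutil is working")
	}
}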

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-051000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-051000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-051000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-051000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.9s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-594000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-594000 --output=json --user=testUser: (1.897768292s)
--- PASS: TestJSONOutput/stop/Command (1.90s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-970000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-970000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.939ms)

-- stdout --
	{"specversion":"1.0","id":"1dc2d483-8614-4c1b-8ed0-09ae061c3665","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-970000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fec40f75-f75a-4986-8705-4b52a8dc3615","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19338"}}
	{"specversion":"1.0","id":"aa17a90d-349b-41fa-9582-b9a81a087fb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig"}}
	{"specversion":"1.0","id":"ed818426-34c4-46f7-af41-6591062411f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"26c546aa-690a-4d3b-8223-c2a5c1593e56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a8d7efaf-8f01-4856-9fea-4fafe59049bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube"}}
	{"specversion":"1.0","id":"1652f6c1-440b-4fad-bd19-fb0fec06c722","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"52ee7a8a-5ac9-4ff0-96b7-8fa4df9a994b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-970000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-970000
--- PASS: TestErrorJSONOutput (0.20s)
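Each stdout line above is one CloudEvents-style JSON record: `specversion`, `id`, `source`, `type`, and a string-keyed `data` object; the error event additionally carries `exitcode` and `name` (here DRV_UNSUPPORTED_OS). A minimal Go sketch for decoding such a stream line by line (field names come from the log; the struct itself is illustrative):

// events_decode_sketch.go — decode minikube's --output=json event stream.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe the JSON lines in here
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}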

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.19s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.19s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-370000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.10125ms)

-- stdout --
	* [NoKubernetes-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-21024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-21024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
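The test passes because the conflicting flags fail fast: combining --no-kubernetes with --kubernetes-version is rejected with MK_USAGE, exit code 14, before any VM work starts. A minimal Go sketch of that exit-code assertion (binary path, profile, and flags copied from the log):

// usage_conflict_sketch.go — asserts the MK_USAGE exit code shown above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "NoKubernetes-911000",
		"--no-kubernetes", "--kubernetes-version=1.20", "--driver=qemu2")
	err := cmd.Run()
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 14 {
		fmt.Println("got the expected MK_USAGE failure (exit 14)")
	}
}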

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-911000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-911000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (49.470166ms)

-- stdout --
	* The control-plane node NoKubernetes-911000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-911000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (0.1s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.10s)

TestNoKubernetes/serial/Stop (1.93s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-911000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-911000: (1.930124667s)
--- PASS: TestNoKubernetes/serial/Stop (1.93s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-911000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-911000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.394292ms)

-- stdout --
	* The control-plane node NoKubernetes-911000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-911000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (3.65s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-051000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-051000 --alsologtostderr -v=3: (3.6461705s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.65s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-051000 -n old-k8s-version-051000: exit status 7 (56.853042ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-051000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
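The "status error: exit status 7 (may be ok)" lines in these EnableAddonAfterStop tests are expected: per the `minikube status` help text, the exit status encodes the VM, cluster, and Kubernetes states in its low bits, so a fully stopped profile reports 1+2+4 = 7. A minimal Go sketch of decoding that bitmask (the bit names are my reading of the help text, not suite code):

// status_bits_sketch.go — decode the status exit code for a stopped profile.
package main

import "fmt"

func main() {
	const code = 7 // the exit status reported above
	fmt.Println("host/VM not running:   ", code&1 != 0)
	fmt.Println("cluster not running:   ", code&2 != 0)
	fmt.Println("kubernetes not running:", code&4 != 0)
}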

TestStartStop/group/no-preload/serial/Stop (2.91s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-354000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-354000 --alsologtostderr -v=3: (2.911156875s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.91s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-354000 -n no-preload-354000: exit status 7 (58.221958ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-354000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.14s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-128000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-128000 --alsologtostderr -v=3: (3.142078667s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-128000 -n embed-certs-128000: exit status 7 (55.936792ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-128000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.53s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-267000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-267000 --alsologtostderr -v=3: (3.526728959s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.53s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-577000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.81s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-577000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-577000 --alsologtostderr -v=3: (3.814026334s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.81s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-267000 -n default-k8s-diff-port-267000: exit status 7 (55.538083ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-267000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-577000 -n newest-cni-577000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-577000 -n newest-cni-577000: exit status 7 (55.296417ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-577000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (11.41s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-051000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2472417868/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722253455239052000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2472417868/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722253455239052000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2472417868/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722253455239052000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2472417868/001/test-1722253455239052000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (52.480917ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.050666ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.745667ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.539584ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.138875ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.370958ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.129375ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "sudo umount -f /mount-9p": exit status 83 (47.437ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-051000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-051000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2472417868/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.41s)

TestFunctional/parallel/MountCmd/specific-port (14.03s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-051000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port769903843/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (63.093834ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.945333ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.329125ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.945292ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.133916ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.254333ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.746333ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.145458ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "sudo umount -f /mount-9p": exit status 83 (48.744667ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-051000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-051000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port769903843/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (14.03s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (13.31s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-051000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2492609515/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-051000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2492609515/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-051000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2492609515/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1: exit status 83 (84.658125ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1: exit status 83 (86.85ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1: exit status 83 (88.503875ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1: exit status 83 (87.925292ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1: exit status 83 (84.047541ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1: exit status 83 (87.656542ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-051000 ssh "findmnt -T" /mount1: exit status 83 (85.251834ms)

-- stdout --
	* The control-plane node functional-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-051000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-051000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2492609515/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-051000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2492609515/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-051000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2492609515/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.31s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-394000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-394000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-394000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-394000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-394000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-394000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-394000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-394000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-394000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-394000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-394000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: /etc/hosts:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: /etc/resolv.conf:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-394000

>>> host: crictl pods:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: crictl containers:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> k8s: describe netcat deployment:
error: context "cilium-394000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-394000" does not exist

>>> k8s: netcat logs:
error: context "cilium-394000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-394000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-394000" does not exist

>>> k8s: coredns logs:
error: context "cilium-394000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-394000" does not exist

>>> k8s: api server logs:
error: context "cilium-394000" does not exist

>>> host: /etc/cni:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: ip a s:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: ip r s:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: iptables-save:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: iptables table nat:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-394000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-394000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-394000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-394000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-394000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-394000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-394000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-394000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-394000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-394000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-394000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: kubelet daemon config:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> k8s: kubelet logs:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-394000

>>> host: docker daemon status:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: docker daemon config:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: docker system info:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: cri-docker daemon status:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: cri-docker daemon config:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: cri-dockerd version:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: containerd daemon status:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: containerd daemon config:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: containerd config dump:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: crio daemon status:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: crio daemon config:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: /etc/crio:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

>>> host: crio config:
* Profile "cilium-394000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394000"

----------------------- debugLogs end: cilium-394000 [took: 2.203778792s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-394000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-394000
--- SKIP: TestNetworkPlugins/group/cilium (2.31s)
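
Note on the debugLogs dump above: every probe fails with one of two kubectl errors ("context was not found for specified context" and context "does not exist") for the same underlying reason: the test was skipped before "minikube start -p cilium-394000" ever ran, so no kubeconfig context or minikube profile exists for it. A minimal, illustrative way to confirm this from a shell, using standard kubectl and minikube commands:

	# Neither listing will show a cilium-394000 entry.
	kubectl config get-contexts
	minikube profile list

	# Reproduces the second error form directly.
	kubectl --context cilium-394000 get pods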

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-177000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-177000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)