Test Report: QEMU_macOS 19184

3e3b94e96544f72da351cd649c60e3a6cb2f9512:2024-07-02:35156

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.78
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.89
27 TestAddons/Setup 10.2
28 TestCertOptions 10.1
29 TestCertExpiration 195.12
30 TestDockerFlags 9.88
31 TestForceSystemdFlag 10
32 TestForceSystemdEnv 9.95
38 TestErrorSpam/setup 9.92
47 TestFunctional/serial/StartWithProxy 9.96
49 TestFunctional/serial/SoftStart 5.25
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
61 TestFunctional/serial/MinikubeKubectlCmd 0.63
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.96
63 TestFunctional/serial/ExtraConfig 5.25
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.26
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.28
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
102 TestFunctional/parallel/DockerEnv/bash 0.04
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.05
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
110 TestFunctional/parallel/ServiceCmd/Format 0.05
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 94.1
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.35
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.33
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.56
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 39.69
141 TestMultiControlPlane/serial/StartCluster 10.19
142 TestMultiControlPlane/serial/DeployApp 110.91
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
150 TestMultiControlPlane/serial/RestartSecondaryNode 43.32
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.07
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.97
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.07
155 TestMultiControlPlane/serial/StopCluster 3.41
156 TestMultiControlPlane/serial/RestartCluster 5.26
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
162 TestImageBuild/serial/Setup 9.9
165 TestJSONOutput/start/Command 9.67
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.04
194 TestMinikubeProfile 9.99
197 TestMountStart/serial/StartWithMountFirst 10.12
200 TestMultiNode/serial/FreshStart2Nodes 9.9
201 TestMultiNode/serial/DeployApp2Nodes 96.46
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.08
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 54.08
209 TestMultiNode/serial/RestartKeepsNodes 8.83
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.06
212 TestMultiNode/serial/RestartMultiNode 5.25
213 TestMultiNode/serial/ValidateNameConflict 19.94
217 TestPreload 9.96
219 TestScheduledStopUnix 10.02
220 TestSkaffold 11.95
223 TestRunningBinaryUpgrade 606.4
225 TestKubernetesUpgrade 17.08
235 TestPause/serial/Start 26.3
238 TestNoKubernetes/serial/StartWithK8s 9.81
239 TestNoKubernetes/serial/StartWithStopK8s 5.28
240 TestNoKubernetes/serial/Start 5.28
244 TestNoKubernetes/serial/StartNoArgs 5.27
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.57
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.4
260 TestStoppedBinaryUpgrade/Upgrade 573.14
262 TestStartStop/group/old-k8s-version/serial/FirstStart 9.95
263 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
264 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
267 TestStartStop/group/old-k8s-version/serial/SecondStart 5.21
268 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
269 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
270 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
271 TestStartStop/group/old-k8s-version/serial/Pause 0.1
273 TestStartStop/group/no-preload/serial/FirstStart 10.02
274 TestStartStop/group/no-preload/serial/DeployApp 0.09
275 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
278 TestStartStop/group/no-preload/serial/SecondStart 5.23
279 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
280 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
281 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
282 TestStartStop/group/no-preload/serial/Pause 0.1
284 TestStartStop/group/embed-certs/serial/FirstStart 9.84
285 TestStartStop/group/embed-certs/serial/DeployApp 0.09
286 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
289 TestStartStop/group/embed-certs/serial/SecondStart 5.25
290 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
291 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
292 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
293 TestStartStop/group/embed-certs/serial/Pause 0.1
295 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.78
296 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
300 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.21
301 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
302 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
303 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
304 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
306 TestStartStop/group/newest-cni/serial/FirstStart 9.73
311 TestStartStop/group/newest-cni/serial/SecondStart 5.22
314 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
315 TestStartStop/group/newest-cni/serial/Pause 0.11
316 TestNetworkPlugins/group/auto/Start 9.83
317 TestNetworkPlugins/group/calico/Start 9.86
318 TestNetworkPlugins/group/custom-flannel/Start 9.83
319 TestNetworkPlugins/group/false/Start 9.67
320 TestNetworkPlugins/group/kindnet/Start 9.71
321 TestNetworkPlugins/group/flannel/Start 9.72
322 TestNetworkPlugins/group/enable-default-cni/Start 9.84
323 TestNetworkPlugins/group/bridge/Start 9.85
324 TestNetworkPlugins/group/kubenet/Start 9.77
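
To reproduce any row locally, the test can be driven through minikube's integration harness. A sketch, assuming a kubernetes/minikube checkout and the TEST_ARGS convention from the project's contributor documentation (the test name and driver below mirror this job; the exact invocation is an assumption, not taken from this report):

	# Build minikube and run a single integration test against the qemu2 driver used here.
	env TEST_ARGS="-minikube-start-args=--driver=qemu2 --test.run TestOffline" make integration
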
TestDownloadOnly/v1.20.0/json-events (13.78s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-617000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-617000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.775417625s)

-- stdout --
	{"specversion":"1.0","id":"d9af7d0d-17b5-4e62-b0d2-cd136b5320e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-617000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"835ed9e3-62d4-4465-bd0f-4682ae71e07f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19184"}}
	{"specversion":"1.0","id":"e1463fb2-ba39-44c8-9418-1b1399c5917a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig"}}
	{"specversion":"1.0","id":"64c08bbf-5a3f-4e7f-bcc2-24464c77b3fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"76745a5e-9226-4b93-89c2-7f1ae26cfbf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a0d695bd-37a8-454a-88ea-c068032ec900","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube"}}
	{"specversion":"1.0","id":"550c1c91-a880-4694-a0f7-afaceb5dab7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"8ca924b0-c9fb-4928-8511-23a39146e2b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5422b381-57e3-4235-a92a-bc1ed35ee630","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"39110d3c-ba50-4baa-a494-c088880c9f78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1e226270-d057-4eb3-9350-cdce25131e73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-617000\" primary control-plane node in \"download-only-617000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"780e8b57-c6ef-48c7-aa2e-dcb0c76b7dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c727dc4-274e-45ca-bbed-63a11e12df86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1084a1a20 0x1084a1a20 0x1084a1a20 0x1084a1a20 0x1084a1a20 0x1084a1a20 0x1084a1a20] Decompressors:map[bz2:0x1400048f690 gz:0x1400048f698 tar:0x1400048f640 tar.bz2:0x1400048f650 tar.gz:0x1400048f660 tar.xz:0x1400048f670 tar.zst:0x1400048f680 tbz2:0x1400048f650 tgz:0x1400048f660 txz:0x1400048f670 tzst:0x1400048f680 xz:0x1400048f6a0 zip:0x1400048f6b0 zst:0x1400048f6a8] Getters:map[file:0x140013885e0 http:0x14000884230 https:0x14000884280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"d371e111-15d6-4e24-8a30-1aac193c9546","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0702 21:18:31.387813    6671 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:18:31.387959    6671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:18:31.387963    6671 out.go:304] Setting ErrFile to fd 2...
	I0702 21:18:31.387966    6671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:18:31.388077    6671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	W0702 21:18:31.388157    6671 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19184-6175/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19184-6175/.minikube/config/config.json: no such file or directory
	I0702 21:18:31.389507    6671 out.go:298] Setting JSON to true
	I0702 21:18:31.407613    6671 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4680,"bootTime":1719975631,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:18:31.407713    6671 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:18:31.412600    6671 out.go:97] [download-only-617000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:18:31.412759    6671 notify.go:220] Checking for updates...
	W0702 21:18:31.412793    6671 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball: no such file or directory
	I0702 21:18:31.415490    6671 out.go:169] MINIKUBE_LOCATION=19184
	I0702 21:18:31.418449    6671 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:18:31.422520    6671 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:18:31.425891    6671 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:18:31.428506    6671 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	W0702 21:18:31.435559    6671 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0702 21:18:31.435833    6671 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:18:31.439442    6671 out.go:97] Using the qemu2 driver based on user configuration
	I0702 21:18:31.439460    6671 start.go:297] selected driver: qemu2
	I0702 21:18:31.439486    6671 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:18:31.439539    6671 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:18:31.442538    6671 out.go:169] Automatically selected the socket_vmnet network
	I0702 21:18:31.448003    6671 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0702 21:18:31.448102    6671 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0702 21:18:31.448161    6671 cni.go:84] Creating CNI manager for ""
	I0702 21:18:31.448179    6671 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0702 21:18:31.448236    6671 start.go:340] cluster config:
	{Name:download-only-617000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:18:31.452181    6671 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:18:31.457308    6671 out.go:97] Downloading VM boot image ...
	I0702 21:18:31.457341    6671 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso
	I0702 21:18:36.669440    6671 out.go:97] Starting "download-only-617000" primary control-plane node in "download-only-617000" cluster
	I0702 21:18:36.669472    6671 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0702 21:18:36.728897    6671 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0702 21:18:36.728905    6671 cache.go:56] Caching tarball of preloaded images
	I0702 21:18:36.729071    6671 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0702 21:18:36.735975    6671 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0702 21:18:36.735980    6671 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0702 21:18:36.810092    6671 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0702 21:18:44.002497    6671 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0702 21:18:44.002657    6671 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0702 21:18:44.698814    6671 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0702 21:18:44.699018    6671 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/download-only-617000/config.json ...
	I0702 21:18:44.699036    6671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/download-only-617000/config.json: {Name:mke1e04db6842554434f52e29a26f088d8c718f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:18:44.700151    6671 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0702 21:18:44.700348    6671 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0702 21:18:45.085326    6671 out.go:169] 
	W0702 21:18:45.088332    6671 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1084a1a20 0x1084a1a20 0x1084a1a20 0x1084a1a20 0x1084a1a20 0x1084a1a20 0x1084a1a20] Decompressors:map[bz2:0x1400048f690 gz:0x1400048f698 tar:0x1400048f640 tar.bz2:0x1400048f650 tar.gz:0x1400048f660 tar.xz:0x1400048f670 tar.zst:0x1400048f680 tbz2:0x1400048f650 tgz:0x1400048f660 txz:0x1400048f670 tzst:0x1400048f680 xz:0x1400048f6a0 zip:0x1400048f6b0 zst:0x1400048f6a8] Getters:map[file:0x140013885e0 http:0x14000884230 https:0x14000884280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0702 21:18:45.088356    6671 out_reason.go:110] 
	W0702 21:18:45.098323    6671 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:18:45.102216    6671 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-617000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (13.78s)
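
The exit status 40 above bottoms out in a single upstream problem: the v1.20.0 darwin/arm64 kubectl checksum URL returns 404, so minikube cannot cache the binary. A minimal check of the URLs taken verbatim from the error message (assumes curl on the agent; both requests should print 404 if the artifact was never published for this platform):

	# Print the final HTTP status code after following redirects.
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl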

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
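
This failure is downstream of the json-events failure above: because the download step exited, the binary was never written to the cache. A direct check of the asserted path (verbatim from the test output):

	# Expected to report "No such file or directory" on this agent.
	stat /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/darwin/arm64/v1.20.0/kubectl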

TestOffline (9.89s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-942000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-942000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.739709291s)

-- stdout --
	* [offline-docker-942000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-942000" primary control-plane node in "offline-docker-942000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-942000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:30:08.773234    8198 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:30:08.773376    8198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:30:08.773381    8198 out.go:304] Setting ErrFile to fd 2...
	I0702 21:30:08.773384    8198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:30:08.773514    8198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:30:08.774584    8198 out.go:298] Setting JSON to false
	I0702 21:30:08.790617    8198 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5377,"bootTime":1719975631,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:30:08.790693    8198 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:30:08.795817    8198 out.go:177] * [offline-docker-942000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:30:08.805862    8198 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:30:08.805922    8198 notify.go:220] Checking for updates...
	I0702 21:30:08.812772    8198 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:30:08.815834    8198 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:30:08.818811    8198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:30:08.821762    8198 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:30:08.824802    8198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:30:08.828112    8198 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:30:08.828164    8198 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:30:08.832746    8198 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:30:08.839721    8198 start.go:297] selected driver: qemu2
	I0702 21:30:08.839727    8198 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:30:08.839732    8198 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:30:08.841656    8198 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:30:08.844801    8198 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:30:08.847846    8198 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:30:08.847877    8198 cni.go:84] Creating CNI manager for ""
	I0702 21:30:08.847885    8198 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:30:08.847888    8198 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:30:08.847925    8198 start.go:340] cluster config:
	{Name:offline-docker-942000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-942000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:30:08.851765    8198 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:30:08.861790    8198 out.go:177] * Starting "offline-docker-942000" primary control-plane node in "offline-docker-942000" cluster
	I0702 21:30:08.865599    8198 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:30:08.865613    8198 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:30:08.865619    8198 cache.go:56] Caching tarball of preloaded images
	I0702 21:30:08.865679    8198 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:30:08.865685    8198 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:30:08.865763    8198 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/offline-docker-942000/config.json ...
	I0702 21:30:08.865775    8198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/offline-docker-942000/config.json: {Name:mk217159e6d2ce5796306b68b2f694b79e7077f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:30:08.866113    8198 start.go:360] acquireMachinesLock for offline-docker-942000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:30:08.866153    8198 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "offline-docker-942000"
	I0702 21:30:08.866170    8198 start.go:93] Provisioning new machine with config: &{Name:offline-docker-942000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-942000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:30:08.866209    8198 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:30:08.874636    8198 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0702 21:30:08.893564    8198 start.go:159] libmachine.API.Create for "offline-docker-942000" (driver="qemu2")
	I0702 21:30:08.893590    8198 client.go:168] LocalClient.Create starting
	I0702 21:30:08.893667    8198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:30:08.893702    8198 main.go:141] libmachine: Decoding PEM data...
	I0702 21:30:08.893718    8198 main.go:141] libmachine: Parsing certificate...
	I0702 21:30:08.893766    8198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:30:08.893793    8198 main.go:141] libmachine: Decoding PEM data...
	I0702 21:30:08.893802    8198 main.go:141] libmachine: Parsing certificate...
	I0702 21:30:08.894275    8198 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:30:09.023269    8198 main.go:141] libmachine: Creating SSH key...
	I0702 21:30:09.051661    8198 main.go:141] libmachine: Creating Disk image...
	I0702 21:30:09.051666    8198 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:30:09.051853    8198 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/disk.qcow2
	I0702 21:30:09.061147    8198 main.go:141] libmachine: STDOUT: 
	I0702 21:30:09.061165    8198 main.go:141] libmachine: STDERR: 
	I0702 21:30:09.061220    8198 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/disk.qcow2 +20000M
	I0702 21:30:09.069361    8198 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:30:09.069377    8198 main.go:141] libmachine: STDERR: 
	I0702 21:30:09.069392    8198 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/disk.qcow2
	I0702 21:30:09.069396    8198 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:30:09.069423    8198 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:53:eb:ff:5f:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/disk.qcow2
	I0702 21:30:09.071032    8198 main.go:141] libmachine: STDOUT: 
	I0702 21:30:09.071047    8198 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:30:09.071068    8198 client.go:171] duration metric: took 177.47675ms to LocalClient.Create
	I0702 21:30:11.073210    8198 start.go:128] duration metric: took 2.207017417s to createHost
	I0702 21:30:11.073298    8198 start.go:83] releasing machines lock for "offline-docker-942000", held for 2.207181292s
	W0702 21:30:11.073335    8198 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:30:11.080050    8198 out.go:177] * Deleting "offline-docker-942000" in qemu2 ...
	W0702 21:30:11.101725    8198 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:30:11.101760    8198 start.go:728] Will try again in 5 seconds ...
	I0702 21:30:16.102617    8198 start.go:360] acquireMachinesLock for offline-docker-942000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:30:16.103370    8198 start.go:364] duration metric: took 631.125µs to acquireMachinesLock for "offline-docker-942000"
	I0702 21:30:16.103538    8198 start.go:93] Provisioning new machine with config: &{Name:offline-docker-942000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-942000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:30:16.103885    8198 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:30:16.111837    8198 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0702 21:30:16.154600    8198 start.go:159] libmachine.API.Create for "offline-docker-942000" (driver="qemu2")
	I0702 21:30:16.154655    8198 client.go:168] LocalClient.Create starting
	I0702 21:30:16.154770    8198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:30:16.154831    8198 main.go:141] libmachine: Decoding PEM data...
	I0702 21:30:16.154851    8198 main.go:141] libmachine: Parsing certificate...
	I0702 21:30:16.154933    8198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:30:16.154978    8198 main.go:141] libmachine: Decoding PEM data...
	I0702 21:30:16.154987    8198 main.go:141] libmachine: Parsing certificate...
	I0702 21:30:16.155445    8198 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:30:16.351793    8198 main.go:141] libmachine: Creating SSH key...
	I0702 21:30:16.418436    8198 main.go:141] libmachine: Creating Disk image...
	I0702 21:30:16.418442    8198 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:30:16.418615    8198 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/disk.qcow2
	I0702 21:30:16.428650    8198 main.go:141] libmachine: STDOUT: 
	I0702 21:30:16.428670    8198 main.go:141] libmachine: STDERR: 
	I0702 21:30:16.428745    8198 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/disk.qcow2 +20000M
	I0702 21:30:16.436975    8198 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:30:16.436990    8198 main.go:141] libmachine: STDERR: 
	I0702 21:30:16.437001    8198 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/disk.qcow2
	I0702 21:30:16.437007    8198 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:30:16.437047    8198 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:ff:9b:48:0e:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/offline-docker-942000/disk.qcow2
	I0702 21:30:16.438617    8198 main.go:141] libmachine: STDOUT: 
	I0702 21:30:16.438633    8198 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:30:16.438645    8198 client.go:171] duration metric: took 283.989542ms to LocalClient.Create
	I0702 21:30:18.440853    8198 start.go:128] duration metric: took 2.336974833s to createHost
	I0702 21:30:18.440955    8198 start.go:83] releasing machines lock for "offline-docker-942000", held for 2.337567167s
	W0702 21:30:18.441315    8198 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-942000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-942000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:30:18.452937    8198 out.go:177] 
	W0702 21:30:18.456853    8198 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:30:18.456972    8198 out.go:239] * 
	* 
	W0702 21:30:18.460214    8198 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:30:18.470842    8198 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-942000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-02 21:30:18.485594 -0700 PDT m=+707.193999585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-942000 -n offline-docker-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-942000 -n offline-docker-942000: exit status 7 (66.8765ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-942000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-942000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-942000
--- FAIL: TestOffline (9.89s)
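
Both VM creation attempts in this block die on the same host-side condition: nothing is accepting connections on /var/run/socket_vmnet, so the socket_vmnet_client wrapper around qemu-system-aarch64 exits before the VM ever boots. A quick triage sketch (the ls path is verbatim from the log; the restart command assumes a Homebrew-managed socket_vmnet service, which may not match this agent's /opt/socket_vmnet install from source):

	# Is the daemon's unix socket present?
	ls -l /var/run/socket_vmnet
	# If the daemon is Homebrew-managed, restart it; a source install would use launchctl instead.
	sudo brew services restart socket_vmnet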

TestAddons/Setup (10.2s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-066000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-066000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.194962333s)

-- stdout --
	* [addons-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-066000" primary control-plane node in "addons-066000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-066000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:18:56.541250    6752 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:18:56.541379    6752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:18:56.541384    6752 out.go:304] Setting ErrFile to fd 2...
	I0702 21:18:56.541386    6752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:18:56.541498    6752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:18:56.542569    6752 out.go:298] Setting JSON to false
	I0702 21:18:56.558502    6752 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4705,"bootTime":1719975631,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:18:56.558572    6752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:18:56.561889    6752 out.go:177] * [addons-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:18:56.564854    6752 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:18:56.564914    6752 notify.go:220] Checking for updates...
	I0702 21:18:56.570805    6752 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:18:56.573793    6752 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:18:56.576825    6752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:18:56.578232    6752 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:18:56.581820    6752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:18:56.584970    6752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:18:56.588564    6752 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:18:56.595840    6752 start.go:297] selected driver: qemu2
	I0702 21:18:56.595848    6752 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:18:56.595857    6752 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:18:56.597992    6752 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:18:56.600859    6752 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:18:56.603815    6752 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:18:56.603858    6752 cni.go:84] Creating CNI manager for ""
	I0702 21:18:56.603864    6752 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:18:56.603868    6752 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:18:56.603889    6752 start.go:340] cluster config:
	{Name:addons-066000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_c
lient SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:18:56.607257    6752 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:18:56.614825    6752 out.go:177] * Starting "addons-066000" primary control-plane node in "addons-066000" cluster
	I0702 21:18:56.618748    6752 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:18:56.618778    6752 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:18:56.618786    6752 cache.go:56] Caching tarball of preloaded images
	I0702 21:18:56.618851    6752 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:18:56.618857    6752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:18:56.619063    6752 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/addons-066000/config.json ...
	I0702 21:18:56.619086    6752 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/addons-066000/config.json: {Name:mkcae7d12b1cf6f6233c7a47c43230e9267cb1e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:18:56.619449    6752 start.go:360] acquireMachinesLock for addons-066000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:18:56.619512    6752 start.go:364] duration metric: took 57.208µs to acquireMachinesLock for "addons-066000"
	I0702 21:18:56.619528    6752 start.go:93] Provisioning new machine with config: &{Name:addons-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:addons-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:18:56.619560    6752 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:18:56.627832    6752 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0702 21:18:56.646951    6752 start.go:159] libmachine.API.Create for "addons-066000" (driver="qemu2")
	I0702 21:18:56.646981    6752 client.go:168] LocalClient.Create starting
	I0702 21:18:56.647127    6752 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:18:56.792318    6752 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:18:57.057791    6752 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:18:57.229618    6752 main.go:141] libmachine: Creating SSH key...
	I0702 21:18:57.301123    6752 main.go:141] libmachine: Creating Disk image...
	I0702 21:18:57.301130    6752 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:18:57.301316    6752 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/disk.qcow2
	I0702 21:18:57.310643    6752 main.go:141] libmachine: STDOUT: 
	I0702 21:18:57.310665    6752 main.go:141] libmachine: STDERR: 
	I0702 21:18:57.310726    6752 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/disk.qcow2 +20000M
	I0702 21:18:57.318661    6752 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:18:57.318672    6752 main.go:141] libmachine: STDERR: 
	I0702 21:18:57.318683    6752 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/disk.qcow2
	I0702 21:18:57.318689    6752 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:18:57.318739    6752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:a8:4e:d8:f4:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/disk.qcow2
	I0702 21:18:57.320310    6752 main.go:141] libmachine: STDOUT: 
	I0702 21:18:57.320325    6752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:18:57.320343    6752 client.go:171] duration metric: took 673.3695ms to LocalClient.Create
	I0702 21:18:59.322482    6752 start.go:128] duration metric: took 2.702952416s to createHost
	I0702 21:18:59.322545    6752 start.go:83] releasing machines lock for "addons-066000", held for 2.703071166s
	W0702 21:18:59.322628    6752 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:18:59.331088    6752 out.go:177] * Deleting "addons-066000" in qemu2 ...
	W0702 21:18:59.354401    6752 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:18:59.354465    6752 start.go:728] Will try again in 5 seconds ...
	I0702 21:19:04.356573    6752 start.go:360] acquireMachinesLock for addons-066000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:19:04.356973    6752 start.go:364] duration metric: took 322.666µs to acquireMachinesLock for "addons-066000"
	I0702 21:19:04.357090    6752 start.go:93] Provisioning new machine with config: &{Name:addons-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:addons-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:19:04.357399    6752 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:19:04.368103    6752 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0702 21:19:04.421315    6752 start.go:159] libmachine.API.Create for "addons-066000" (driver="qemu2")
	I0702 21:19:04.421355    6752 client.go:168] LocalClient.Create starting
	I0702 21:19:04.421461    6752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:19:04.421519    6752 main.go:141] libmachine: Decoding PEM data...
	I0702 21:19:04.421541    6752 main.go:141] libmachine: Parsing certificate...
	I0702 21:19:04.421648    6752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:19:04.421696    6752 main.go:141] libmachine: Decoding PEM data...
	I0702 21:19:04.421711    6752 main.go:141] libmachine: Parsing certificate...
	I0702 21:19:04.422221    6752 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:19:04.563594    6752 main.go:141] libmachine: Creating SSH key...
	I0702 21:19:04.649364    6752 main.go:141] libmachine: Creating Disk image...
	I0702 21:19:04.649370    6752 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:19:04.649548    6752 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/disk.qcow2
	I0702 21:19:04.658443    6752 main.go:141] libmachine: STDOUT: 
	I0702 21:19:04.658460    6752 main.go:141] libmachine: STDERR: 
	I0702 21:19:04.658508    6752 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/disk.qcow2 +20000M
	I0702 21:19:04.666374    6752 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:19:04.666392    6752 main.go:141] libmachine: STDERR: 
	I0702 21:19:04.666402    6752 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/disk.qcow2
	I0702 21:19:04.666405    6752 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:19:04.666440    6752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:a8:46:46:70:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/addons-066000/disk.qcow2
	I0702 21:19:04.668014    6752 main.go:141] libmachine: STDOUT: 
	I0702 21:19:04.668030    6752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:19:04.668045    6752 client.go:171] duration metric: took 246.690917ms to LocalClient.Create
	I0702 21:19:06.668626    6752 start.go:128] duration metric: took 2.31124625s to createHost
	I0702 21:19:06.668672    6752 start.go:83] releasing machines lock for "addons-066000", held for 2.311717166s
	W0702 21:19:06.669037    6752 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:19:06.678619    6752 out.go:177] 
	W0702 21:19:06.682737    6752 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:19:06.682785    6752 out.go:239] * 
	* 
	W0702 21:19:06.685337    6752 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:19:06.693359    6752 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-066000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.20s)
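The trace above shows the full create path: libmachine writes a raw disk, converts it with qemu-img convert -f raw -O qcow2, grows it with qemu-img resize +20000M, then hands the VM to socket_vmnet_client. When that launch fails, minikube deletes the profile, waits five seconds ("Will try again in 5 seconds", start.go:728), retries once, and exits with status 80 (GUEST_PROVISION). The sketch below is a simplified illustration of that retry shape only; the function names are illustrative, not minikube's actual API.
-- sketch --
	// Simplified shape of the create/retry flow visible in the trace above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		// Stands in for libmachine's create path; in this run it always fails.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
			if err = createHost(); err != nil {
				// minikube exits with status 80 here, which is what the
				// test's Non-zero exit assertion records.
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}
-- /sketch --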

TestCertOptions (10.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-775000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-775000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.865872458s)

-- stdout --
	* [cert-options-775000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-775000" primary control-plane node in "cert-options-775000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-775000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-775000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-775000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-775000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (59.575833ms)

-- stdout --
	* The control-plane node cert-options-775000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-775000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-775000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-775000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters:\n\t- cluster:\n\t    certificate-authority: /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Tue, 02 Jul 2024 21:35:38 PDT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.33.1\n\t      name: cluster_info\n\t    server: https://10.0.2.15:8443\n\t  name: running-upgrade-908000\n\tcontexts:\n\t- context:\n\t    cluster: running-upgrade-908000\n\t    extensions:\n\t    - extension:\n\t        last-update: Tue, 02 Jul 2024 21:35:38 PDT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.33.1\n\t      name: context_info\n\t    namespace: default\n\t    user: running-upgrade-908000\n\t  name: running-upgrade-908000\n\tcurrent-context: running-upgrade-908000\n\tkind: Config\n\tpreferences: {}\n\tusers:\n\t- name: running-upgrade-908000\n\t  user:\n\t    client-certificate: /Users/jenkins/minikube-integration/19184-6175/.minikube/pro
files/running-upgrade-908000/client.crt\n\t    client-key: /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/client.key\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-775000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-775000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.1585ms)

-- stdout --
	* The control-plane node cert-options-775000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-775000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-775000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-775000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-775000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-02 21:35:40.087685 -0700 PDT m=+1028.800932668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-775000 -n cert-options-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-775000 -n cert-options-775000: exit status 7 (29.149458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-775000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-775000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-775000
--- FAIL: TestCertOptions (10.10s)
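The SAN assertions at cert_options_test.go:69 fail as a cascade: the cluster never started, so there is no apiserver certificate to inspect, and the kubeconfig that kubectl config view returns still points at the unrelated running-upgrade-908000 profile. For reference, here is a hedged sketch of how SAN entries in a PEM-encoded certificate can be inspected with crypto/x509; this is only an illustration, not the test's actual implementation, and the local file path is hypothetical.
-- sketch --
	// Inspects IP and DNS SANs in a PEM-encoded certificate (illustration only).
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			fmt.Println("no cert to inspect:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("not PEM data")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse error:", err)
			return
		}
		// A healthy run would show 127.0.0.1 and 192.168.15.15 among the IP
		// SANs, and localhost and www.google.com among the DNS SANs.
		for _, ip := range cert.IPAddresses {
			fmt.Println("IP SAN:", ip)
		}
		for _, name := range cert.DNSNames {
			fmt.Println("DNS SAN:", name)
		}
	}
-- /sketch --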

TestCertExpiration (195.12s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-826000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-826000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.76062s)

-- stdout --
	* [cert-expiration-826000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-826000" primary control-plane node in "cert-expiration-826000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-826000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-826000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-826000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-826000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.221898167s)

-- stdout --
	* [cert-expiration-826000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-826000" primary control-plane node in "cert-expiration-826000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-826000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-826000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-826000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-826000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-826000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-826000" primary control-plane node in "cert-expiration-826000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-826000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-826000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-826000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-02 21:35:29.955168 -0700 PDT m=+1018.668469668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-826000 -n cert-expiration-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-826000 -n cert-expiration-826000: exit status 7 (61.554875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-826000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-826000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-826000
--- FAIL: TestCertExpiration (195.12s)
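The ~195s wall time is expected even though both starts fail within seconds: the first start uses --cert-expiration=3m, the test evidently waits out that three-minute window, and the second start (--cert-expiration=8760h) is then supposed to emit the expired-certs warning, which never appears because no VM ever came up. As a reference point, the sketch below shows how a certificate's validity window can be checked against its NotAfter field with crypto/x509; the path is hypothetical and this is not minikube's code.
-- sketch --
	// Checks whether a PEM-encoded certificate has expired (illustration only).
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expired(path string, now time.Time) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: not PEM data", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return now.After(cert.NotAfter), nil
	}

	func main() {
		// Hypothetical path; with --cert-expiration=3m such a cert would expire
		// three minutes after creation, triggering the warning the test expects.
		ok, err := expired("/var/lib/minikube/certs/apiserver.crt", time.Now())
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expired:", ok)
	}
-- /sketch --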

TestDockerFlags (9.88s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-414000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-414000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.67707575s)

-- stdout --
	* [docker-flags-414000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-414000" primary control-plane node in "docker-flags-414000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-414000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:32:05.117247    8622 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:32:05.117380    8622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:32:05.117384    8622 out.go:304] Setting ErrFile to fd 2...
	I0702 21:32:05.117386    8622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:32:05.117522    8622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:32:05.118615    8622 out.go:298] Setting JSON to false
	I0702 21:32:05.135208    8622 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5494,"bootTime":1719975631,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:32:05.135295    8622 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:32:05.140451    8622 out.go:177] * [docker-flags-414000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:32:05.147364    8622 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:32:05.147466    8622 notify.go:220] Checking for updates...
	I0702 21:32:05.153510    8622 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:32:05.154942    8622 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:32:05.158485    8622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:32:05.161478    8622 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:32:05.164491    8622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:32:05.167869    8622 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:32:05.167950    8622 config.go:182] Loaded profile config "running-upgrade-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:32:05.167998    8622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:32:05.171385    8622 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:32:05.178477    8622 start.go:297] selected driver: qemu2
	I0702 21:32:05.178484    8622 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:32:05.178490    8622 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:32:05.180611    8622 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:32:05.183536    8622 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:32:05.186526    8622 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0702 21:32:05.186544    8622 cni.go:84] Creating CNI manager for ""
	I0702 21:32:05.186551    8622 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:32:05.186555    8622 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:32:05.186583    8622 start.go:340] cluster config:
	{Name:docker-flags-414000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:32:05.189935    8622 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:32:05.198473    8622 out.go:177] * Starting "docker-flags-414000" primary control-plane node in "docker-flags-414000" cluster
	I0702 21:32:05.201357    8622 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:32:05.201370    8622 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:32:05.201375    8622 cache.go:56] Caching tarball of preloaded images
	I0702 21:32:05.201431    8622 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:32:05.201436    8622 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:32:05.201485    8622 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/docker-flags-414000/config.json ...
	I0702 21:32:05.201495    8622 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/docker-flags-414000/config.json: {Name:mk876a9831486d45064091201d84ec4746a87ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:32:05.201740    8622 start.go:360] acquireMachinesLock for docker-flags-414000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:32:05.201770    8622 start.go:364] duration metric: took 24.084µs to acquireMachinesLock for "docker-flags-414000"
	I0702 21:32:05.201782    8622 start.go:93] Provisioning new machine with config: &{Name:docker-flags-414000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:32:05.201814    8622 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:32:05.206486    8622 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0702 21:32:05.222348    8622 start.go:159] libmachine.API.Create for "docker-flags-414000" (driver="qemu2")
	I0702 21:32:05.222392    8622 client.go:168] LocalClient.Create starting
	I0702 21:32:05.222468    8622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:32:05.222497    8622 main.go:141] libmachine: Decoding PEM data...
	I0702 21:32:05.222505    8622 main.go:141] libmachine: Parsing certificate...
	I0702 21:32:05.222546    8622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:32:05.222580    8622 main.go:141] libmachine: Decoding PEM data...
	I0702 21:32:05.222586    8622 main.go:141] libmachine: Parsing certificate...
	I0702 21:32:05.222972    8622 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:32:05.352152    8622 main.go:141] libmachine: Creating SSH key...
	I0702 21:32:05.378750    8622 main.go:141] libmachine: Creating Disk image...
	I0702 21:32:05.378755    8622 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:32:05.378918    8622 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/disk.qcow2
	I0702 21:32:05.388418    8622 main.go:141] libmachine: STDOUT: 
	I0702 21:32:05.388435    8622 main.go:141] libmachine: STDERR: 
	I0702 21:32:05.388479    8622 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/disk.qcow2 +20000M
	I0702 21:32:05.396355    8622 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:32:05.396372    8622 main.go:141] libmachine: STDERR: 
	I0702 21:32:05.396384    8622 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/disk.qcow2
	I0702 21:32:05.396389    8622 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:32:05.396414    8622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:92:7b:b7:0a:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/disk.qcow2
	I0702 21:32:05.398103    8622 main.go:141] libmachine: STDOUT: 
	I0702 21:32:05.398117    8622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:32:05.398137    8622 client.go:171] duration metric: took 175.744333ms to LocalClient.Create
	I0702 21:32:07.400262    8622 start.go:128] duration metric: took 2.198478458s to createHost
	I0702 21:32:07.400295    8622 start.go:83] releasing machines lock for "docker-flags-414000", held for 2.198562333s
	W0702 21:32:07.400337    8622 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:32:07.409518    8622 out.go:177] * Deleting "docker-flags-414000" in qemu2 ...
	W0702 21:32:07.429523    8622 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:32:07.429544    8622 start.go:728] Will try again in 5 seconds ...
	I0702 21:32:12.431596    8622 start.go:360] acquireMachinesLock for docker-flags-414000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:32:12.432031    8622 start.go:364] duration metric: took 361.875µs to acquireMachinesLock for "docker-flags-414000"
	I0702 21:32:12.432089    8622 start.go:93] Provisioning new machine with config: &{Name:docker-flags-414000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:32:12.432302    8622 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:32:12.441943    8622 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0702 21:32:12.489451    8622 start.go:159] libmachine.API.Create for "docker-flags-414000" (driver="qemu2")
	I0702 21:32:12.489504    8622 client.go:168] LocalClient.Create starting
	I0702 21:32:12.489663    8622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:32:12.489728    8622 main.go:141] libmachine: Decoding PEM data...
	I0702 21:32:12.489762    8622 main.go:141] libmachine: Parsing certificate...
	I0702 21:32:12.489823    8622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:32:12.489868    8622 main.go:141] libmachine: Decoding PEM data...
	I0702 21:32:12.489917    8622 main.go:141] libmachine: Parsing certificate...
	I0702 21:32:12.490449    8622 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:32:12.632236    8622 main.go:141] libmachine: Creating SSH key...
	I0702 21:32:12.710872    8622 main.go:141] libmachine: Creating Disk image...
	I0702 21:32:12.710879    8622 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:32:12.711075    8622 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/disk.qcow2
	I0702 21:32:12.720803    8622 main.go:141] libmachine: STDOUT: 
	I0702 21:32:12.720834    8622 main.go:141] libmachine: STDERR: 
	I0702 21:32:12.720891    8622 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/disk.qcow2 +20000M
	I0702 21:32:12.729099    8622 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:32:12.729115    8622 main.go:141] libmachine: STDERR: 
	I0702 21:32:12.729124    8622 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/disk.qcow2
	I0702 21:32:12.729142    8622 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:32:12.729174    8622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:ee:aa:65:b5:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/docker-flags-414000/disk.qcow2
	I0702 21:32:12.730763    8622 main.go:141] libmachine: STDOUT: 
	I0702 21:32:12.730780    8622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:32:12.730792    8622 client.go:171] duration metric: took 241.287125ms to LocalClient.Create
	I0702 21:32:14.732942    8622 start.go:128] duration metric: took 2.300634458s to createHost
	I0702 21:32:14.733011    8622 start.go:83] releasing machines lock for "docker-flags-414000", held for 2.301008959s
	W0702 21:32:14.733388    8622 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-414000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-414000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:32:14.741823    8622 out.go:177] 
	W0702 21:32:14.744746    8622 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:32:14.744762    8622 out.go:239] * 
	* 
	W0702 21:32:14.745858    8622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:32:14.756810    8622 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-414000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-414000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-414000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (58.957084ms)

-- stdout --
	* The control-plane node docker-flags-414000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-414000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-414000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-414000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-414000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-414000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-414000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-414000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-414000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (40.937334ms)

-- stdout --
	* The control-plane node docker-flags-414000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-414000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-414000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-414000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-414000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-414000\"\n"
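The assertions above all fail for the same upstream reason: the node never booted, so the ssh subcommand could only print the "host is not running" hint. For reference, this is roughly how the flags under test are expected to surface on a node that does boot (a sketch inferred from the test arguments, not output from this run):

    --docker-env=FOO=BAR --docker-env=BAZ=BAT  ->  Environment= of docker.service contains FOO=BAR and BAZ=BAT
    --docker-opt=debug --docker-opt=icc=true   ->  ExecStart= shows dockerd invoked with --debug and --icc=true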
panic.go:626: *** TestDockerFlags FAILED at 2024-07-02 21:32:14.868782 -0700 PDT m=+823.579521251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-414000 -n docker-flags-414000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-414000 -n docker-flags-414000: exit status 7 (29.970166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-414000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-414000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-414000
--- FAIL: TestDockerFlags (9.88s)
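Note that TestDockerFlags never actually exercised --docker-env/--docker-opt handling: both createHost attempts died on the same stderr line, Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning the socket_vmnet daemon that the qemu2 driver uses for guest networking was not listening on the CI host. A minimal pre-flight check on the host might look like this (the daemon binary path, Homebrew service name, and gateway address are assumptions, not taken from this log):

    # is anything bound to the unix socket the driver dials?
    ls -l /var/run/socket_vmnet
    # if installed via Homebrew, restart the daemon (service name assumed)
    HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet
    # or run it directly as root (gateway address assumed)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet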

TestForceSystemdFlag (10s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-237000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-237000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.814043167s)

-- stdout --
	* [force-systemd-flag-237000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-237000" primary control-plane node in "force-systemd-flag-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:31:55.121036    8595 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:31:55.121160    8595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:31:55.121164    8595 out.go:304] Setting ErrFile to fd 2...
	I0702 21:31:55.121167    8595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:31:55.121290    8595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:31:55.125971    8595 out.go:298] Setting JSON to false
	I0702 21:31:55.142723    8595 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5484,"bootTime":1719975631,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:31:55.142785    8595 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:31:55.147501    8595 out.go:177] * [force-systemd-flag-237000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:31:55.155482    8595 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:31:55.155573    8595 notify.go:220] Checking for updates...
	I0702 21:31:55.161387    8595 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:31:55.164453    8595 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:31:55.167514    8595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:31:55.170454    8595 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:31:55.173466    8595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:31:55.176770    8595 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:31:55.176831    8595 config.go:182] Loaded profile config "running-upgrade-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:31:55.176883    8595 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:31:55.185455    8595 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:31:55.188507    8595 start.go:297] selected driver: qemu2
	I0702 21:31:55.188513    8595 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:31:55.188519    8595 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:31:55.190688    8595 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:31:55.194400    8595 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:31:55.198527    8595 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0702 21:31:55.198541    8595 cni.go:84] Creating CNI manager for ""
	I0702 21:31:55.198548    8595 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:31:55.198555    8595 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:31:55.198579    8595 start.go:340] cluster config:
	{Name:force-systemd-flag-237000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:31:55.202076    8595 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:31:55.210440    8595 out.go:177] * Starting "force-systemd-flag-237000" primary control-plane node in "force-systemd-flag-237000" cluster
	I0702 21:31:55.214430    8595 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:31:55.214452    8595 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:31:55.214463    8595 cache.go:56] Caching tarball of preloaded images
	I0702 21:31:55.214520    8595 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:31:55.214525    8595 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:31:55.214572    8595 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/force-systemd-flag-237000/config.json ...
	I0702 21:31:55.214582    8595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/force-systemd-flag-237000/config.json: {Name:mk4d8f37be35fd5d46558f269352d35d837fc5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:31:55.214991    8595 start.go:360] acquireMachinesLock for force-systemd-flag-237000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:31:55.215029    8595 start.go:364] duration metric: took 29.292µs to acquireMachinesLock for "force-systemd-flag-237000"
	I0702 21:31:55.215041    8595 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:31:55.215067    8595 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:31:55.223411    8595 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0702 21:31:55.239599    8595 start.go:159] libmachine.API.Create for "force-systemd-flag-237000" (driver="qemu2")
	I0702 21:31:55.239634    8595 client.go:168] LocalClient.Create starting
	I0702 21:31:55.239706    8595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:31:55.239735    8595 main.go:141] libmachine: Decoding PEM data...
	I0702 21:31:55.239744    8595 main.go:141] libmachine: Parsing certificate...
	I0702 21:31:55.239785    8595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:31:55.239808    8595 main.go:141] libmachine: Decoding PEM data...
	I0702 21:31:55.239823    8595 main.go:141] libmachine: Parsing certificate...
	I0702 21:31:55.240200    8595 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:31:55.367587    8595 main.go:141] libmachine: Creating SSH key...
	I0702 21:31:55.464121    8595 main.go:141] libmachine: Creating Disk image...
	I0702 21:31:55.464127    8595 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:31:55.464305    8595 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/disk.qcow2
	I0702 21:31:55.473652    8595 main.go:141] libmachine: STDOUT: 
	I0702 21:31:55.473670    8595 main.go:141] libmachine: STDERR: 
	I0702 21:31:55.473720    8595 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/disk.qcow2 +20000M
	I0702 21:31:55.481574    8595 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:31:55.481587    8595 main.go:141] libmachine: STDERR: 
	I0702 21:31:55.481599    8595 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/disk.qcow2
	I0702 21:31:55.481606    8595 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:31:55.481633    8595 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:0c:75:3d:ac:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/disk.qcow2
	I0702 21:31:55.483273    8595 main.go:141] libmachine: STDOUT: 
	I0702 21:31:55.483291    8595 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:31:55.483315    8595 client.go:171] duration metric: took 243.679208ms to LocalClient.Create
	I0702 21:31:57.484760    8595 start.go:128] duration metric: took 2.269726333s to createHost
	I0702 21:31:57.484787    8595 start.go:83] releasing machines lock for "force-systemd-flag-237000", held for 2.269798417s
	W0702 21:31:57.484810    8595 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:31:57.490718    8595 out.go:177] * Deleting "force-systemd-flag-237000" in qemu2 ...
	W0702 21:31:57.506433    8595 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:31:57.506446    8595 start.go:728] Will try again in 5 seconds ...
	I0702 21:32:02.508585    8595 start.go:360] acquireMachinesLock for force-systemd-flag-237000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:32:02.509183    8595 start.go:364] duration metric: took 483.958µs to acquireMachinesLock for "force-systemd-flag-237000"
	I0702 21:32:02.509259    8595 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:32:02.509527    8595 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:32:02.519229    8595 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0702 21:32:02.568717    8595 start.go:159] libmachine.API.Create for "force-systemd-flag-237000" (driver="qemu2")
	I0702 21:32:02.568767    8595 client.go:168] LocalClient.Create starting
	I0702 21:32:02.568919    8595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:32:02.568985    8595 main.go:141] libmachine: Decoding PEM data...
	I0702 21:32:02.569001    8595 main.go:141] libmachine: Parsing certificate...
	I0702 21:32:02.569074    8595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:32:02.569118    8595 main.go:141] libmachine: Decoding PEM data...
	I0702 21:32:02.569141    8595 main.go:141] libmachine: Parsing certificate...
	I0702 21:32:02.569674    8595 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:32:02.713123    8595 main.go:141] libmachine: Creating SSH key...
	I0702 21:32:02.850611    8595 main.go:141] libmachine: Creating Disk image...
	I0702 21:32:02.850620    8595 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:32:02.850798    8595 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/disk.qcow2
	I0702 21:32:02.859900    8595 main.go:141] libmachine: STDOUT: 
	I0702 21:32:02.859921    8595 main.go:141] libmachine: STDERR: 
	I0702 21:32:02.859974    8595 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/disk.qcow2 +20000M
	I0702 21:32:02.867811    8595 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:32:02.867825    8595 main.go:141] libmachine: STDERR: 
	I0702 21:32:02.867837    8595 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/disk.qcow2
	I0702 21:32:02.867850    8595 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:32:02.867883    8595 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:1c:8f:df:b2:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-flag-237000/disk.qcow2
	I0702 21:32:02.869510    8595 main.go:141] libmachine: STDOUT: 
	I0702 21:32:02.869525    8595 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:32:02.869538    8595 client.go:171] duration metric: took 300.761791ms to LocalClient.Create
	I0702 21:32:04.871709    8595 start.go:128] duration metric: took 2.362186458s to createHost
	I0702 21:32:04.871794    8595 start.go:83] releasing machines lock for "force-systemd-flag-237000", held for 2.362629208s
	W0702 21:32:04.872168    8595 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:32:04.880681    8595 out.go:177] 
	W0702 21:32:04.885900    8595 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:32:04.885972    8595 out.go:239] * 
	* 
	W0702 21:32:04.888656    8595 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:32:04.896684    8595 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-237000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-237000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-237000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (72.006708ms)

-- stdout --
	* The control-plane node force-systemd-flag-237000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-237000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-237000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-02 21:32:04.981562 -0700 PDT m=+813.692103043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-237000 -n force-systemd-flag-237000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-237000 -n force-systemd-flag-237000: exit status 7 (35.055792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-237000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-237000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-237000
--- FAIL: TestForceSystemdFlag (10.00s)
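TestForceSystemdFlag shows the same failure signature: the run never reached the --force-systemd logic it was meant to verify. The qemu invocation in the log also makes the plumbing explicit: socket_vmnet_client connects to the unix socket and then execs qemu-system-aarch64 with the connection passed as file descriptor 3, which is what -netdev socket,id=net0,fd=3 consumes. That makes the refusal reproducible without minikube at all (a sketch; `true` is just a placeholder for the command socket_vmnet_client would normally exec):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

With the daemon down, this should exit non-zero with the same "Connection refused" message; once it connects cleanly, the qemu2 tests should at least get past LocalClient.Create.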

TestForceSystemdEnv (9.95s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-973000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-973000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.776564458s)

-- stdout --
	* [force-systemd-env-973000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-973000" primary control-plane node in "force-systemd-env-973000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-973000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:31:45.175683    8569 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:31:45.175816    8569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:31:45.175820    8569 out.go:304] Setting ErrFile to fd 2...
	I0702 21:31:45.175823    8569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:31:45.175936    8569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:31:45.177060    8569 out.go:298] Setting JSON to false
	I0702 21:31:45.194574    8569 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5474,"bootTime":1719975631,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:31:45.194640    8569 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:31:45.199496    8569 out.go:177] * [force-systemd-env-973000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:31:45.205379    8569 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:31:45.205504    8569 notify.go:220] Checking for updates...
	I0702 21:31:45.212479    8569 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:31:45.215419    8569 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:31:45.218510    8569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:31:45.221369    8569 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:31:45.224424    8569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0702 21:31:45.227807    8569 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:31:45.227871    8569 config.go:182] Loaded profile config "running-upgrade-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:31:45.227924    8569 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:31:45.231337    8569 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:31:45.238333    8569 start.go:297] selected driver: qemu2
	I0702 21:31:45.238342    8569 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:31:45.238348    8569 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:31:45.240746    8569 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:31:45.244391    8569 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:31:45.248532    8569 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0702 21:31:45.248547    8569 cni.go:84] Creating CNI manager for ""
	I0702 21:31:45.248554    8569 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:31:45.248558    8569 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:31:45.248587    8569 start.go:340] cluster config:
	{Name:force-systemd-env-973000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-973000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:31:45.252617    8569 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:31:45.256481    8569 out.go:177] * Starting "force-systemd-env-973000" primary control-plane node in "force-systemd-env-973000" cluster
	I0702 21:31:45.263414    8569 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:31:45.263438    8569 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:31:45.263449    8569 cache.go:56] Caching tarball of preloaded images
	I0702 21:31:45.263528    8569 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:31:45.263535    8569 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:31:45.263585    8569 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/force-systemd-env-973000/config.json ...
	I0702 21:31:45.263595    8569 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/force-systemd-env-973000/config.json: {Name:mk352daa19d5408863efd5aa288f1625c82bcb2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:31:45.263793    8569 start.go:360] acquireMachinesLock for force-systemd-env-973000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:31:45.263826    8569 start.go:364] duration metric: took 25.209µs to acquireMachinesLock for "force-systemd-env-973000"
	I0702 21:31:45.263838    8569 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-973000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-973000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:31:45.263867    8569 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:31:45.267442    8569 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0702 21:31:45.292888    8569 start.go:159] libmachine.API.Create for "force-systemd-env-973000" (driver="qemu2")
	I0702 21:31:45.292926    8569 client.go:168] LocalClient.Create starting
	I0702 21:31:45.293002    8569 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:31:45.293034    8569 main.go:141] libmachine: Decoding PEM data...
	I0702 21:31:45.293042    8569 main.go:141] libmachine: Parsing certificate...
	I0702 21:31:45.293086    8569 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:31:45.293109    8569 main.go:141] libmachine: Decoding PEM data...
	I0702 21:31:45.293117    8569 main.go:141] libmachine: Parsing certificate...
	I0702 21:31:45.293428    8569 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:31:45.424615    8569 main.go:141] libmachine: Creating SSH key...
	I0702 21:31:45.484796    8569 main.go:141] libmachine: Creating Disk image...
	I0702 21:31:45.484824    8569 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:31:45.485028    8569 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/disk.qcow2
	I0702 21:31:45.496105    8569 main.go:141] libmachine: STDOUT: 
	I0702 21:31:45.496121    8569 main.go:141] libmachine: STDERR: 
	I0702 21:31:45.496195    8569 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/disk.qcow2 +20000M
	I0702 21:31:45.505630    8569 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:31:45.505664    8569 main.go:141] libmachine: STDERR: 
	I0702 21:31:45.505677    8569 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/disk.qcow2
	I0702 21:31:45.505684    8569 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:31:45.505716    8569 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:83:42:d1:c6:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/disk.qcow2
	I0702 21:31:45.508324    8569 main.go:141] libmachine: STDOUT: 
	I0702 21:31:45.508345    8569 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:31:45.508373    8569 client.go:171] duration metric: took 215.443083ms to LocalClient.Create
	I0702 21:31:47.510637    8569 start.go:128] duration metric: took 2.246764541s to createHost
	I0702 21:31:47.510730    8569 start.go:83] releasing machines lock for "force-systemd-env-973000", held for 2.246940791s
	W0702 21:31:47.510780    8569 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:31:47.517886    8569 out.go:177] * Deleting "force-systemd-env-973000" in qemu2 ...
	W0702 21:31:47.540015    8569 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:31:47.540209    8569 start.go:728] Will try again in 5 seconds ...
	I0702 21:31:52.542300    8569 start.go:360] acquireMachinesLock for force-systemd-env-973000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:31:52.542811    8569 start.go:364] duration metric: took 404.667µs to acquireMachinesLock for "force-systemd-env-973000"
	I0702 21:31:52.542964    8569 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-973000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-973000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:31:52.543231    8569 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:31:52.553236    8569 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0702 21:31:52.602133    8569 start.go:159] libmachine.API.Create for "force-systemd-env-973000" (driver="qemu2")
	I0702 21:31:52.602187    8569 client.go:168] LocalClient.Create starting
	I0702 21:31:52.602299    8569 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:31:52.602368    8569 main.go:141] libmachine: Decoding PEM data...
	I0702 21:31:52.602385    8569 main.go:141] libmachine: Parsing certificate...
	I0702 21:31:52.602454    8569 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:31:52.602498    8569 main.go:141] libmachine: Decoding PEM data...
	I0702 21:31:52.602513    8569 main.go:141] libmachine: Parsing certificate...
	I0702 21:31:52.603033    8569 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:31:52.745133    8569 main.go:141] libmachine: Creating SSH key...
	I0702 21:31:52.864378    8569 main.go:141] libmachine: Creating Disk image...
	I0702 21:31:52.864385    8569 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:31:52.864557    8569 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/disk.qcow2
	I0702 21:31:52.873647    8569 main.go:141] libmachine: STDOUT: 
	I0702 21:31:52.873664    8569 main.go:141] libmachine: STDERR: 
	I0702 21:31:52.873717    8569 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/disk.qcow2 +20000M
	I0702 21:31:52.881787    8569 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:31:52.881799    8569 main.go:141] libmachine: STDERR: 
	I0702 21:31:52.881811    8569 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/disk.qcow2
	I0702 21:31:52.881817    8569 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:31:52.881852    8569 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:f7:e9:77:51:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/force-systemd-env-973000/disk.qcow2
	I0702 21:31:52.883454    8569 main.go:141] libmachine: STDOUT: 
	I0702 21:31:52.883467    8569 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:31:52.883479    8569 client.go:171] duration metric: took 281.290917ms to LocalClient.Create
	I0702 21:31:54.885650    8569 start.go:128] duration metric: took 2.342427s to createHost
	I0702 21:31:54.885737    8569 start.go:83] releasing machines lock for "force-systemd-env-973000", held for 2.342943125s
	W0702 21:31:54.886173    8569 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-973000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-973000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:31:54.896807    8569 out.go:177] 
	W0702 21:31:54.900918    8569 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:31:54.900953    8569 out.go:239] * 
	* 
	W0702 21:31:54.903908    8569 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:31:54.911873    8569 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-973000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-973000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-973000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (65.011333ms)

-- stdout --
	* The control-plane node force-systemd-env-973000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-973000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-973000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-02 21:31:54.990581 -0700 PDT m=+803.700921418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-973000 -n force-systemd-env-973000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-973000 -n force-systemd-env-973000: exit status 7 (32.528458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-973000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-973000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-973000
--- FAIL: TestForceSystemdEnv (9.95s)
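Every start attempt above fails at the same step: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet, and that connection is refused, meaning no socket_vmnet daemon is accepting on the socket. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew as on this host (the service-restart command is an assumption, not taken from the log):

	ls -l /var/run/socket_vmnet              # does the unix socket exist?
	sudo lsof -U | grep socket_vmnet         # is any process accepting on it?
	# If nothing is listening, restart the daemon (assumes a
	# Homebrew-managed service):
	sudo brew services restart socket_vmnet

Until that daemon is back, every qemu2 test below that brings up a VM on the socket_vmnet network fails the same way.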

TestErrorSpam/setup (9.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-331000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-331000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 --driver=qemu2 : exit status 80 (9.921270583s)

-- stdout --
	* [nospam-331000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-331000" primary control-plane node in "nospam-331000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-331000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-331000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-331000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-331000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-331000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19184
- KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-331000" primary control-plane node in "nospam-331000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-331000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-331000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.92s)
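The missing kubeadm sub-steps are a downstream symptom: those progress lines are only printed after the VM boots and kubeadm init runs, and this run never gets past VM creation. A quick way to look for them on a healthy run (a sketch; the grep pattern simply matches the three sub-steps the test asserts on):

	out/minikube-darwin-arm64 start -p nospam-331000 --driver=qemu2 2>&1 | grep -E 'Generating certificates|Booting up control plane|Configuring RBAC'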

TestFunctional/serial/StartWithProxy (9.96s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-250000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-250000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.891492125s)

-- stdout --
	* [functional-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-250000" primary control-plane node in "functional-250000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-250000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51034 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51034 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51034 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-250000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-250000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19184
- KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-250000" primary control-plane node in "functional-250000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-250000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51034 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51034 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51034 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-250000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (68.873208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.96s)
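The repeated "Local proxy ignored" warnings are expected rather than part of the failure: minikube deliberately declines to pass a proxy that points at localhost into the VM, since localhost inside the guest is not the host. The messages the test wants ("Found network options", "You appear to be using a proxy") are only printed once a VM actually starts. A hedged sketch of passing a proxy the guest could reach (the gateway address below is hypothetical):

	# Use a host address reachable from the VM instead of localhost:
	HTTP_PROXY=http://192.168.105.1:51034 out/minikube-darwin-arm64 start -p functional-250000 --driver=qemu2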

TestFunctional/serial/SoftStart (5.25s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-250000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-250000 --alsologtostderr -v=8: exit status 80 (5.184243958s)

-- stdout --
	* [functional-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-250000" primary control-plane node in "functional-250000" cluster
	* Restarting existing qemu2 VM for "functional-250000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-250000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:19:36.878357    6902 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:19:36.878480    6902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:19:36.878483    6902 out.go:304] Setting ErrFile to fd 2...
	I0702 21:19:36.878486    6902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:19:36.878627    6902 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:19:36.879574    6902 out.go:298] Setting JSON to false
	I0702 21:19:36.895824    6902 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4745,"bootTime":1719975631,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:19:36.895915    6902 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:19:36.900473    6902 out.go:177] * [functional-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:19:36.907258    6902 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:19:36.907304    6902 notify.go:220] Checking for updates...
	I0702 21:19:36.914420    6902 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:19:36.915676    6902 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:19:36.918406    6902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:19:36.921397    6902 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:19:36.924389    6902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:19:36.927733    6902 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:19:36.927782    6902 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:19:36.932338    6902 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:19:36.939360    6902 start.go:297] selected driver: qemu2
	I0702 21:19:36.939368    6902 start.go:901] validating driver "qemu2" against &{Name:functional-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:functional-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:19:36.939445    6902 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:19:36.941636    6902 cni.go:84] Creating CNI manager for ""
	I0702 21:19:36.941657    6902 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:19:36.941719    6902 start.go:340] cluster config:
	{Name:functional-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-250000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:19:36.945197    6902 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:19:36.952299    6902 out.go:177] * Starting "functional-250000" primary control-plane node in "functional-250000" cluster
	I0702 21:19:36.958355    6902 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:19:36.958370    6902 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:19:36.958381    6902 cache.go:56] Caching tarball of preloaded images
	I0702 21:19:36.958443    6902 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:19:36.958448    6902 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:19:36.958509    6902 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/functional-250000/config.json ...
	I0702 21:19:36.958896    6902 start.go:360] acquireMachinesLock for functional-250000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:19:36.958923    6902 start.go:364] duration metric: took 21.125µs to acquireMachinesLock for "functional-250000"
	I0702 21:19:36.958932    6902 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:19:36.958964    6902 fix.go:54] fixHost starting: 
	I0702 21:19:36.959074    6902 fix.go:112] recreateIfNeeded on functional-250000: state=Stopped err=<nil>
	W0702 21:19:36.959083    6902 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:19:36.966337    6902 out.go:177] * Restarting existing qemu2 VM for "functional-250000" ...
	I0702 21:19:36.970441    6902 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:81:54:58:18:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/disk.qcow2
	I0702 21:19:36.972490    6902 main.go:141] libmachine: STDOUT: 
	I0702 21:19:36.972508    6902 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:19:36.972535    6902 fix.go:56] duration metric: took 13.571291ms for fixHost
	I0702 21:19:36.972540    6902 start.go:83] releasing machines lock for "functional-250000", held for 13.613459ms
	W0702 21:19:36.972545    6902 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:19:36.972570    6902 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:19:36.972574    6902 start.go:728] Will try again in 5 seconds ...
	I0702 21:19:41.974634    6902 start.go:360] acquireMachinesLock for functional-250000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:19:41.975051    6902 start.go:364] duration metric: took 283.584µs to acquireMachinesLock for "functional-250000"
	I0702 21:19:41.975157    6902 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:19:41.975175    6902 fix.go:54] fixHost starting: 
	I0702 21:19:41.975901    6902 fix.go:112] recreateIfNeeded on functional-250000: state=Stopped err=<nil>
	W0702 21:19:41.975925    6902 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:19:41.983337    6902 out.go:177] * Restarting existing qemu2 VM for "functional-250000" ...
	I0702 21:19:41.987462    6902 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:81:54:58:18:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/disk.qcow2
	I0702 21:19:41.996379    6902 main.go:141] libmachine: STDOUT: 
	I0702 21:19:41.996431    6902 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:19:41.996496    6902 fix.go:56] duration metric: took 21.319ms for fixHost
	I0702 21:19:41.996512    6902 start.go:83] releasing machines lock for "functional-250000", held for 21.437833ms
	W0702 21:19:41.996665    6902 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-250000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-250000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:19:42.005346    6902 out.go:177] 
	W0702 21:19:42.009256    6902 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:19:42.009309    6902 out.go:239] * 
	* 
	W0702 21:19:42.011908    6902 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:19:42.019262    6902 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-250000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.185990667s for "functional-250000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (66.648875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
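Unlike the fresh-create failures above, this run takes the fixHost path ("Skipping create...Using existing machine configuration") and only restarts the existing VM, yet it dies on the same socket connect. The failing step can be reproduced outside minikube (client path and socket path are copied from the log; "true" stands in for the wrapped qemu command line):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# expected while the daemon is down:
	# Failed to connect to "/var/run/socket_vmnet": Connection refused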

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.160542ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-250000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (29.594958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
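This failure is purely downstream of the failed start: minikube never created the cluster, so it never wrote a functional-250000 context into the kubeconfig, and kubectl has no current-context to report. A quick way to inspect the kubeconfig these tests use (standard kubectl; the KUBECONFIG path is taken from the log above):

	KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig kubectl config get-contexts
	# an empty table here matches the "current-context is not set" error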

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-250000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-250000 get po -A: exit status 1 (26.62575ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-250000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-250000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-250000\n"*: args "kubectl --context functional-250000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-250000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (29.816833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh sudo crictl images: exit status 83 (40.964208ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-250000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)
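The exit status 83 seen in these ssh-based tests never reaches the node at all: minikube sees state=Stopped, prints the advisory above, and exits before running crictl, so the expected pause:3.3 digest (3d18732f8686c) cannot appear. On a running profile the same assertion boils down to (a sketch, assuming a healthy cluster):

	out/minikube-darwin-arm64 -p functional-250000 ssh sudo crictl images | grep pause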

TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (39.810583ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-250000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.883625ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.841041ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-250000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.63s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 kubectl -- --context functional-250000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 kubectl -- --context functional-250000 get pods: exit status 1 (599.798792ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-250000
	* no server found for cluster "functional-250000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-250000 kubectl -- --context functional-250000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (31.127708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.63s)
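The kubectl failure above is downstream of the same stopped VM: because the cluster never started, minikube never (re)wrote the functional-250000 entry into the kubeconfig, so kubectl has no context or server to resolve. A quick way to see what contexts actually exist (standard kubectl subcommands, sketched here rather than taken from this run):

	kubectl config get-contexts        # functional-250000 will be missing from the list
	kubectl config current-context     # whatever context, if any, kubectl would use instead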

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.96s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-250000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-250000 get pods: exit status 1 (927.382084ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-250000
	* no server found for cluster "functional-250000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-250000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (29.079416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.96s)

TestFunctional/serial/ExtraConfig (5.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-250000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-250000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.180311542s)

-- stdout --
	* [functional-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-250000" primary control-plane node in "functional-250000" cluster
	* Restarting existing qemu2 VM for "functional-250000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-250000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-250000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-250000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.180776166s for "functional-250000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (71.648667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)
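The stderr above shows the root cause behind this run's failures: the profile is configured with Network:socket_vmnet, and nothing is listening on /var/run/socket_vmnet, so every qemu2 VM restart dies with "Connection refused". A plausible host-side triage, assuming socket_vmnet was installed via Homebrew as minikube's qemu2 driver docs describe (the service commands below are an assumption about this host, not taken from this log):

	ls -l /var/run/socket_vmnet                  # the socket should exist while the daemon is up
	sudo brew services restart socket_vmnet      # restart the Homebrew-managed daemon (assumed install method)
	out/minikube-darwin-arm64 start -p functional-250000    # then retry the failed start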

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-250000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-250000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.691417ms)

** stderr ** 
	error: context "functional-250000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-250000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (30.168416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 logs: exit status 83 (76.446666ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-617000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
	|         | -p download-only-617000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
	| delete  | -p download-only-617000                                                  | download-only-617000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
	| start   | -o=json --download-only                                                  | download-only-214000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
	|         | -p download-only-214000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
	| delete  | -p download-only-214000                                                  | download-only-214000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
	| delete  | -p download-only-617000                                                  | download-only-617000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
	| delete  | -p download-only-214000                                                  | download-only-214000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
	| start   | --download-only -p                                                       | binary-mirror-608000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
	|         | binary-mirror-608000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:50999                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-608000                                                  | binary-mirror-608000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
	| addons  | enable dashboard -p                                                      | addons-066000        | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
	|         | addons-066000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-066000        | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
	|         | addons-066000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-066000 --wait=true                                             | addons-066000        | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-066000                                                         | addons-066000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	| start   | -p nospam-331000 -n=1 --memory=2250 --wait=false                         | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-331000                                                         | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	| start   | -p functional-250000                                                     | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-250000                                                     | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-250000 cache add                                              | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-250000 cache add                                              | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-250000 cache add                                              | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-250000 cache add                                              | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	|         | minikube-local-cache-test:functional-250000                              |                      |         |         |                     |                     |
	| cache   | functional-250000 cache delete                                           | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	|         | minikube-local-cache-test:functional-250000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	| ssh     | functional-250000 ssh sudo                                               | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-250000                                                        | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-250000 ssh                                                    | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-250000 cache reload                                           | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	| ssh     | functional-250000 ssh                                                    | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-250000 kubectl --                                             | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | --context functional-250000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-250000                                                     | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/02 21:19:46
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0702 21:19:46.987356    6978 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:19:46.987470    6978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:19:46.987473    6978 out.go:304] Setting ErrFile to fd 2...
	I0702 21:19:46.987475    6978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:19:46.987589    6978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:19:46.988587    6978 out.go:298] Setting JSON to false
	I0702 21:19:47.004908    6978 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4755,"bootTime":1719975631,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:19:47.004964    6978 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:19:47.009264    6978 out.go:177] * [functional-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:19:47.018221    6978 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:19:47.018250    6978 notify.go:220] Checking for updates...
	I0702 21:19:47.023465    6978 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:19:47.026164    6978 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:19:47.029225    6978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:19:47.032174    6978 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:19:47.035149    6978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:19:47.038511    6978 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:19:47.038560    6978 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:19:47.043144    6978 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:19:47.050131    6978 start.go:297] selected driver: qemu2
	I0702 21:19:47.050135    6978 start.go:901] validating driver "qemu2" against &{Name:functional-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:19:47.050189    6978 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:19:47.052538    6978 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:19:47.052579    6978 cni.go:84] Creating CNI manager for ""
	I0702 21:19:47.052584    6978 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:19:47.052629    6978 start.go:340] cluster config:
	{Name:functional-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:19:47.056310    6978 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:19:47.063085    6978 out.go:177] * Starting "functional-250000" primary control-plane node in "functional-250000" cluster
	I0702 21:19:47.067062    6978 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:19:47.067074    6978 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:19:47.067081    6978 cache.go:56] Caching tarball of preloaded images
	I0702 21:19:47.067137    6978 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:19:47.067141    6978 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:19:47.067186    6978 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/functional-250000/config.json ...
	I0702 21:19:47.067635    6978 start.go:360] acquireMachinesLock for functional-250000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:19:47.067670    6978 start.go:364] duration metric: took 30.833µs to acquireMachinesLock for "functional-250000"
	I0702 21:19:47.067679    6978 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:19:47.067681    6978 fix.go:54] fixHost starting: 
	I0702 21:19:47.067804    6978 fix.go:112] recreateIfNeeded on functional-250000: state=Stopped err=<nil>
	W0702 21:19:47.067812    6978 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:19:47.076149    6978 out.go:177] * Restarting existing qemu2 VM for "functional-250000" ...
	I0702 21:19:47.079181    6978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:81:54:58:18:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/disk.qcow2
	I0702 21:19:47.081398    6978 main.go:141] libmachine: STDOUT: 
	I0702 21:19:47.081411    6978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:19:47.081441    6978 fix.go:56] duration metric: took 13.759208ms for fixHost
	I0702 21:19:47.081444    6978 start.go:83] releasing machines lock for "functional-250000", held for 13.77125ms
	W0702 21:19:47.081450    6978 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:19:47.081499    6978 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:19:47.081504    6978 start.go:728] Will try again in 5 seconds ...
	I0702 21:19:52.083587    6978 start.go:360] acquireMachinesLock for functional-250000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:19:52.084024    6978 start.go:364] duration metric: took 331.667µs to acquireMachinesLock for "functional-250000"
	I0702 21:19:52.084166    6978 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:19:52.084179    6978 fix.go:54] fixHost starting: 
	I0702 21:19:52.084946    6978 fix.go:112] recreateIfNeeded on functional-250000: state=Stopped err=<nil>
	W0702 21:19:52.084966    6978 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:19:52.088522    6978 out.go:177] * Restarting existing qemu2 VM for "functional-250000" ...
	I0702 21:19:52.097454    6978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:81:54:58:18:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/disk.qcow2
	I0702 21:19:52.106778    6978 main.go:141] libmachine: STDOUT: 
	I0702 21:19:52.106844    6978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:19:52.106974    6978 fix.go:56] duration metric: took 22.757375ms for fixHost
	I0702 21:19:52.106987    6978 start.go:83] releasing machines lock for "functional-250000", held for 22.949ms
	W0702 21:19:52.107172    6978 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-250000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:19:52.115265    6978 out.go:177] 
	W0702 21:19:52.119333    6978 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:19:52.119354    6978 out.go:239] * 
	W0702 21:19:52.121985    6978 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:19:52.127260    6978 out.go:177] 
	
	
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-250000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-617000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
|         | -p download-only-617000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| delete  | -p download-only-617000                                                  | download-only-617000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| start   | -o=json --download-only                                                  | download-only-214000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
|         | -p download-only-214000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| delete  | -p download-only-214000                                                  | download-only-214000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| delete  | -p download-only-617000                                                  | download-only-617000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| delete  | -p download-only-214000                                                  | download-only-214000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| start   | --download-only -p                                                       | binary-mirror-608000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
|         | binary-mirror-608000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50999                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-608000                                                  | binary-mirror-608000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| addons  | enable dashboard -p                                                      | addons-066000        | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
|         | addons-066000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-066000        | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
|         | addons-066000                                                            |                      |         |         |                     |                     |
| start   | -p addons-066000 --wait=true                                             | addons-066000        | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-066000                                                         | addons-066000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
| start   | -p nospam-331000 -n=1 --memory=2250 --wait=false                         | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-331000                                                         | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
| start   | -p functional-250000                                                     | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-250000                                                     | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-250000 cache add                                              | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-250000 cache add                                              | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-250000 cache add                                              | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-250000 cache add                                              | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | minikube-local-cache-test:functional-250000                              |                      |         |         |                     |                     |
| cache   | functional-250000 cache delete                                           | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | minikube-local-cache-test:functional-250000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
| ssh     | functional-250000 ssh sudo                                               | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-250000                                                        | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-250000 ssh                                                    | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-250000 cache reload                                           | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
| ssh     | functional-250000 ssh                                                    | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-250000 kubectl --                                             | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | --context functional-250000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-250000                                                     | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/07/02 21:19:46
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.4 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0702 21:19:46.987356    6978 out.go:291] Setting OutFile to fd 1 ...
I0702 21:19:46.987470    6978 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:19:46.987473    6978 out.go:304] Setting ErrFile to fd 2...
I0702 21:19:46.987475    6978 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:19:46.987589    6978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
I0702 21:19:46.988587    6978 out.go:298] Setting JSON to false
I0702 21:19:47.004908    6978 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4755,"bootTime":1719975631,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0702 21:19:47.004964    6978 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0702 21:19:47.009264    6978 out.go:177] * [functional-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0702 21:19:47.018221    6978 out.go:177]   - MINIKUBE_LOCATION=19184
I0702 21:19:47.018250    6978 notify.go:220] Checking for updates...
I0702 21:19:47.023465    6978 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
I0702 21:19:47.026164    6978 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0702 21:19:47.029225    6978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0702 21:19:47.032174    6978 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
I0702 21:19:47.035149    6978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0702 21:19:47.038511    6978 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0702 21:19:47.038560    6978 driver.go:392] Setting default libvirt URI to qemu:///system
I0702 21:19:47.043144    6978 out.go:177] * Using the qemu2 driver based on existing profile
I0702 21:19:47.050131    6978 start.go:297] selected driver: qemu2
I0702 21:19:47.050135    6978 start.go:901] validating driver "qemu2" against &{Name:functional-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0702 21:19:47.050189    6978 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0702 21:19:47.052538    6978 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0702 21:19:47.052579    6978 cni.go:84] Creating CNI manager for ""
I0702 21:19:47.052584    6978 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0702 21:19:47.052629    6978 start.go:340] cluster config:
{Name:functional-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0702 21:19:47.056310    6978 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0702 21:19:47.063085    6978 out.go:177] * Starting "functional-250000" primary control-plane node in "functional-250000" cluster
I0702 21:19:47.067062    6978 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0702 21:19:47.067074    6978 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
I0702 21:19:47.067081    6978 cache.go:56] Caching tarball of preloaded images
I0702 21:19:47.067137    6978 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0702 21:19:47.067141    6978 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0702 21:19:47.067186    6978 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/functional-250000/config.json ...
I0702 21:19:47.067635    6978 start.go:360] acquireMachinesLock for functional-250000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0702 21:19:47.067670    6978 start.go:364] duration metric: took 30.833µs to acquireMachinesLock for "functional-250000"
I0702 21:19:47.067679    6978 start.go:96] Skipping create...Using existing machine configuration
I0702 21:19:47.067681    6978 fix.go:54] fixHost starting: 
I0702 21:19:47.067804    6978 fix.go:112] recreateIfNeeded on functional-250000: state=Stopped err=<nil>
W0702 21:19:47.067812    6978 fix.go:138] unexpected machine state, will restart: <nil>
I0702 21:19:47.076149    6978 out.go:177] * Restarting existing qemu2 VM for "functional-250000" ...
I0702 21:19:47.079181    6978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:81:54:58:18:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/disk.qcow2
I0702 21:19:47.081398    6978 main.go:141] libmachine: STDOUT: 
I0702 21:19:47.081411    6978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0702 21:19:47.081441    6978 fix.go:56] duration metric: took 13.759208ms for fixHost
I0702 21:19:47.081444    6978 start.go:83] releasing machines lock for "functional-250000", held for 13.77125ms
W0702 21:19:47.081450    6978 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0702 21:19:47.081499    6978 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0702 21:19:47.081504    6978 start.go:728] Will try again in 5 seconds ...
I0702 21:19:52.083587    6978 start.go:360] acquireMachinesLock for functional-250000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0702 21:19:52.084024    6978 start.go:364] duration metric: took 331.667µs to acquireMachinesLock for "functional-250000"
I0702 21:19:52.084166    6978 start.go:96] Skipping create...Using existing machine configuration
I0702 21:19:52.084179    6978 fix.go:54] fixHost starting: 
I0702 21:19:52.084946    6978 fix.go:112] recreateIfNeeded on functional-250000: state=Stopped err=<nil>
W0702 21:19:52.084966    6978 fix.go:138] unexpected machine state, will restart: <nil>
I0702 21:19:52.088522    6978 out.go:177] * Restarting existing qemu2 VM for "functional-250000" ...
I0702 21:19:52.097454    6978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:81:54:58:18:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/disk.qcow2
I0702 21:19:52.106778    6978 main.go:141] libmachine: STDOUT: 
I0702 21:19:52.106844    6978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0702 21:19:52.106974    6978 fix.go:56] duration metric: took 22.757375ms for fixHost
I0702 21:19:52.106987    6978 start.go:83] releasing machines lock for "functional-250000", held for 22.949ms
W0702 21:19:52.107172    6978 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-250000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0702 21:19:52.115265    6978 out.go:177] 
W0702 21:19:52.119333    6978 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0702 21:19:52.119354    6978 out.go:239] * 
W0702 21:19:52.121985    6978 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0702 21:19:52.127260    6978 out.go:177] 

* The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
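Every failure above traces to one root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client gets "Connection refused" and the qemu2 VM never starts. The trace also shows minikube's recovery pattern (start.go:713/728): warn once, wait a fixed 5 seconds, retry once, then exit with GUEST_PROVISION. Below is a minimal Go sketch of that pattern, using a plain unix-socket dial as a stand-in for the real driver start; the names here are illustrative, not minikube's actual functions.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialVMNet stands in for the step that fails above: it needs a
// listener on the socket_vmnet unix socket, and "Connection refused"
// means no daemon is serving it.
func dialVMNet(path string) error {
	conn, err := net.Dial("unix", path)
	if err != nil {
		return fmt.Errorf("driver start: Failed to connect to %q: %w", path, err)
	}
	return conn.Close()
}

// startWithRetry mirrors the trace: try, warn, sleep a fixed delay,
// retry once, then surface the final error to the caller.
func startWithRetry(path string, delay time.Duration) error {
	err := dialVMNet(path)
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(delay)
	return dialVMNet(path)
}

func main() {
	if err := startWithRetry("/var/run/socket_vmnet", 5*time.Second); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}

The likely fix is outside minikube entirely: the socket_vmnet daemon on this build agent has to be running before any qemu2 profile can restart.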

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1733826808/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-617000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
|         | -p download-only-617000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| delete  | -p download-only-617000                                                  | download-only-617000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| start   | -o=json --download-only                                                  | download-only-214000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
|         | -p download-only-214000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| delete  | -p download-only-214000                                                  | download-only-214000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| delete  | -p download-only-617000                                                  | download-only-617000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| delete  | -p download-only-214000                                                  | download-only-214000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| start   | --download-only -p                                                       | binary-mirror-608000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
|         | binary-mirror-608000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50999                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-608000                                                  | binary-mirror-608000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
| addons  | enable dashboard -p                                                      | addons-066000        | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
|         | addons-066000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-066000        | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
|         | addons-066000                                                            |                      |         |         |                     |                     |
| start   | -p addons-066000 --wait=true                                             | addons-066000        | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-066000                                                         | addons-066000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
| start   | -p nospam-331000 -n=1 --memory=2250 --wait=false                         | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-331000 --log_dir                                                  | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-331000                                                         | nospam-331000        | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
| start   | -p functional-250000                                                     | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-250000                                                     | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-250000 cache add                                              | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-250000 cache add                                              | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-250000 cache add                                              | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-250000 cache add                                              | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | minikube-local-cache-test:functional-250000                              |                      |         |         |                     |                     |
| cache   | functional-250000 cache delete                                           | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | minikube-local-cache-test:functional-250000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
| ssh     | functional-250000 ssh sudo                                               | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-250000                                                        | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-250000 ssh                                                    | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-250000 cache reload                                           | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
| ssh     | functional-250000 ssh                                                    | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT | 02 Jul 24 21:19 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-250000 kubectl --                                             | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | --context functional-250000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-250000                                                     | functional-250000    | jenkins | v1.33.1 | 02 Jul 24 21:19 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

                                                
                                                

                                                
                                                
==> Last Start <==
Log file created at: 2024/07/02 21:19:46
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.4 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0702 21:19:46.987356    6978 out.go:291] Setting OutFile to fd 1 ...
I0702 21:19:46.987470    6978 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:19:46.987473    6978 out.go:304] Setting ErrFile to fd 2...
I0702 21:19:46.987475    6978 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:19:46.987589    6978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
I0702 21:19:46.988587    6978 out.go:298] Setting JSON to false
I0702 21:19:47.004908    6978 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4755,"bootTime":1719975631,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0702 21:19:47.004964    6978 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0702 21:19:47.009264    6978 out.go:177] * [functional-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0702 21:19:47.018221    6978 out.go:177]   - MINIKUBE_LOCATION=19184
I0702 21:19:47.018250    6978 notify.go:220] Checking for updates...
I0702 21:19:47.023465    6978 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
I0702 21:19:47.026164    6978 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0702 21:19:47.029225    6978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0702 21:19:47.032174    6978 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
I0702 21:19:47.035149    6978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0702 21:19:47.038511    6978 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0702 21:19:47.038560    6978 driver.go:392] Setting default libvirt URI to qemu:///system
I0702 21:19:47.043144    6978 out.go:177] * Using the qemu2 driver based on existing profile
I0702 21:19:47.050131    6978 start.go:297] selected driver: qemu2
I0702 21:19:47.050135    6978 start.go:901] validating driver "qemu2" against &{Name:functional-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:functional-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0702 21:19:47.050189    6978 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0702 21:19:47.052538    6978 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0702 21:19:47.052579    6978 cni.go:84] Creating CNI manager for ""
I0702 21:19:47.052584    6978 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0702 21:19:47.052629    6978 start.go:340] cluster config:
{Name:functional-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0702 21:19:47.056310    6978 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0702 21:19:47.063085    6978 out.go:177] * Starting "functional-250000" primary control-plane node in "functional-250000" cluster
I0702 21:19:47.067062    6978 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0702 21:19:47.067074    6978 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
I0702 21:19:47.067081    6978 cache.go:56] Caching tarball of preloaded images
I0702 21:19:47.067137    6978 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0702 21:19:47.067141    6978 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0702 21:19:47.067186    6978 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/functional-250000/config.json ...
I0702 21:19:47.067635    6978 start.go:360] acquireMachinesLock for functional-250000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0702 21:19:47.067670    6978 start.go:364] duration metric: took 30.833µs to acquireMachinesLock for "functional-250000"
I0702 21:19:47.067679    6978 start.go:96] Skipping create...Using existing machine configuration
I0702 21:19:47.067681    6978 fix.go:54] fixHost starting: 
I0702 21:19:47.067804    6978 fix.go:112] recreateIfNeeded on functional-250000: state=Stopped err=<nil>
W0702 21:19:47.067812    6978 fix.go:138] unexpected machine state, will restart: <nil>
I0702 21:19:47.076149    6978 out.go:177] * Restarting existing qemu2 VM for "functional-250000" ...
I0702 21:19:47.079181    6978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:81:54:58:18:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/disk.qcow2
I0702 21:19:47.081398    6978 main.go:141] libmachine: STDOUT: 
I0702 21:19:47.081411    6978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0702 21:19:47.081441    6978 fix.go:56] duration metric: took 13.759208ms for fixHost
I0702 21:19:47.081444    6978 start.go:83] releasing machines lock for "functional-250000", held for 13.77125ms
W0702 21:19:47.081450    6978 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0702 21:19:47.081499    6978 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0702 21:19:47.081504    6978 start.go:728] Will try again in 5 seconds ...
I0702 21:19:52.083587    6978 start.go:360] acquireMachinesLock for functional-250000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0702 21:19:52.084024    6978 start.go:364] duration metric: took 331.667µs to acquireMachinesLock for "functional-250000"
I0702 21:19:52.084166    6978 start.go:96] Skipping create...Using existing machine configuration
I0702 21:19:52.084179    6978 fix.go:54] fixHost starting: 
I0702 21:19:52.084946    6978 fix.go:112] recreateIfNeeded on functional-250000: state=Stopped err=<nil>
W0702 21:19:52.084966    6978 fix.go:138] unexpected machine state, will restart: <nil>
I0702 21:19:52.088522    6978 out.go:177] * Restarting existing qemu2 VM for "functional-250000" ...
I0702 21:19:52.097454    6978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:81:54:58:18:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/functional-250000/disk.qcow2
I0702 21:19:52.106778    6978 main.go:141] libmachine: STDOUT: 
I0702 21:19:52.106844    6978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0702 21:19:52.106974    6978 fix.go:56] duration metric: took 22.757375ms for fixHost
I0702 21:19:52.106987    6978 start.go:83] releasing machines lock for "functional-250000", held for 22.949ms
W0702 21:19:52.107172    6978 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-250000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0702 21:19:52.115265    6978 out.go:177] 
W0702 21:19:52.119333    6978 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0702 21:19:52.119354    6978 out.go:239] * 
W0702 21:19:52.121985    6978 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0702 21:19:52.127260    6978 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
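Every remaining TestFunctional failure traces back to the same root cause captured in the log above: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, the connection to /var/run/socket_vmnet is refused on both attempts, and the VM never boots. A minimal host-side sketch (standard library only; this helper is not part of the test suite) for checking whether the socket_vmnet daemon is accepting connections:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken verbatim from the libmachine log lines above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Corresponds to the `Connection refused` STDERR captured above.
		fmt.Printf("socket_vmnet not reachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this dial fails, restarting or reinstalling the socket_vmnet service on the host is the natural first step before re-running minikube start.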

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-250000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-250000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.151458ms)

** stderr ** 
	error: context "functional-250000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-250000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
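Because the VM never started, no functional-250000 entry was ever written to the kubeconfig, which is why kubectl reports that the context does not exist here and in the parallel tests below. A short sketch of the equivalent host-side check, assuming k8s.io/client-go (an assumption for illustration; the test itself shells out to kubectl):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve and load the kubeconfig the same way kubectl does by default.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts["functional-250000"]; !ok {
		// Mirrors the kubectl stderr captured above.
		fmt.Println(`context "functional-250000" does not exist`)
	}
}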

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-250000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-250000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-250000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-250000 --alsologtostderr -v=1] stderr:
I0702 21:20:35.440017    7292 out.go:291] Setting OutFile to fd 1 ...
I0702 21:20:35.440371    7292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:20:35.440375    7292 out.go:304] Setting ErrFile to fd 2...
I0702 21:20:35.440378    7292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:20:35.440516    7292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
I0702 21:20:35.440723    7292 mustload.go:65] Loading cluster: functional-250000
I0702 21:20:35.440927    7292 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0702 21:20:35.444467    7292 out.go:177] * The control-plane node functional-250000 host is not running: state=Stopped
I0702 21:20:35.448301    7292 out.go:177]   To start a cluster, run: "minikube start -p functional-250000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (43.136916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 status: exit status 7 (29.858875ms)

-- stdout --
	functional-250000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-250000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (29.84475ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-250000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 status -o json: exit status 7 (29.885083ms)

-- stdout --
	{"Name":"functional-250000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-250000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (29.466375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
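The JSON emitted by minikube status -o json above is a flat object, so it can be decoded with a small struct. The field names below are taken verbatim from the captured stdout; the struct itself is illustrative, not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
)

// status mirrors the fields visible in the stdout captured above.
type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Raw payload copied from the test's stdout above.
	raw := `{"Name":"functional-250000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
	var s status
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s apiserver=%s kubeconfig=%s\n", s.Name, s.Host, s.APIServer, s.Kubeconfig)
}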

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-250000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-250000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.152167ms)

** stderr ** 
	error: context "functional-250000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-250000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-250000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-250000 describe po hello-node-connect: exit status 1 (26.00675ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-250000

** /stderr **
functional_test.go:1600: "kubectl --context functional-250000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-250000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-250000 logs -l app=hello-node-connect: exit status 1 (26.479791ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-250000

** /stderr **
functional_test.go:1606: "kubectl --context functional-250000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-250000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-250000 describe svc hello-node-connect: exit status 1 (26.991417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-250000

** /stderr **
functional_test.go:1612: "kubectl --context functional-250000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (30.777791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-250000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (29.862875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "echo hello": exit status 83 (44.704208ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-250000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-250000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-250000\"\n"*. args "out/minikube-darwin-arm64 -p functional-250000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "cat /etc/hostname": exit status 83 (44.935917ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-250000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-250000"- but got *"* The control-plane node functional-250000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-250000\"\n"*. args "out/minikube-darwin-arm64 -p functional-250000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (33.8775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.26s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (53.949917ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-250000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh -n functional-250000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh -n functional-250000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.053833ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-250000 ssh -n functional-250000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-250000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-250000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 cp functional-250000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd907937849/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 cp functional-250000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd907937849/001/cp-test.txt: exit status 83 (40.727958ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-250000 cp functional-250000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd907937849/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh -n functional-250000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh -n functional-250000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.846083ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-250000 ssh -n functional-250000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd907937849/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-250000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-250000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (43.755875ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-250000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh -n functional-250000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh -n functional-250000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (38.987291ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-250000 ssh -n functional-250000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-250000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-250000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.26s)
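The (-want +got) blocks in this test are rendered in the diff style of github.com/google/go-cmp (an assumption based on the output shape; the helper's implementation is not shown in this report). A minimal reproduction of the comparison pattern, with want taken from the quoted content of testdata/cp-test.txt and got standing in for the stopped-host advice the command printed instead:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-250000 host is not running: state=Stopped\n"
	if diff := cmp.Diff(want, got); diff != "" {
		// Prints a (-want +got) diff like the ones captured above.
		fmt.Printf("content mismatch (-want +got):\n%s", diff)
	}
}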

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/6669/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /etc/test/nested/copy/6669/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /etc/test/nested/copy/6669/hosts": exit status 83 (40.380417ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /etc/test/nested/copy/6669/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-250000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-250000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (30.311166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/6669.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /etc/ssl/certs/6669.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /etc/ssl/certs/6669.pem": exit status 83 (41.825166ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/6669.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-250000 ssh \"sudo cat /etc/ssl/certs/6669.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6669.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-250000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-250000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/6669.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /usr/share/ca-certificates/6669.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /usr/share/ca-certificates/6669.pem": exit status 83 (39.740209ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/6669.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-250000 ssh \"sudo cat /usr/share/ca-certificates/6669.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6669.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-250000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-250000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (47.700125ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-250000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-250000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-250000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/66692.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /etc/ssl/certs/66692.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /etc/ssl/certs/66692.pem": exit status 83 (42.681875ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/66692.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-250000 ssh \"sudo cat /etc/ssl/certs/66692.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/66692.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-250000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-250000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/66692.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /usr/share/ca-certificates/66692.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /usr/share/ca-certificates/66692.pem": exit status 83 (37.580833ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/66692.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-250000 ssh \"sudo cat /usr/share/ca-certificates/66692.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/66692.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-250000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-250000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (38.70475ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-250000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-250000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-250000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (29.250709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)
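CertSync verifies that minikube_test.pem and minikube_test2.pem were synced to several in-VM paths; with the host stopped, each ssh sudo cat returns the advice text instead, so every PEM comparison fails. As a sketch of the kind of validation involved, the following parses a PEM file with the standard library (the local file path is hypothetical; the report only names the files):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local path, for illustration only.
	data, err := os.ReadFile("minikube_test.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("parsed certificate, subject:", cert.Subject)
}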

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-250000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-250000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.078958ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-250000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-250000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-250000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-250000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-250000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-250000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-250000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-250000 -n functional-250000: exit status 7 (30.02325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
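The failing assertion above can be reproduced outside the harness. A minimal Go sketch of the same label check, using the exact kubectl invocation from the log (it requires a kubeconfig context named functional-250000, which this run never created):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same go-template as the test: print every label key on the first node.
	tmpl := `--template={{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "--context", "functional-250000",
		"get", "nodes", "--output=go-template", tmpl).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	for _, want := range []string{
		"minikube.k8s.io/commit", "minikube.k8s.io/version",
		"minikube.k8s.io/updated_at", "minikube.k8s.io/name", "minikube.k8s.io/primary",
	} {
		if !strings.Contains(string(out), want) {
			fmt.Printf("missing node label %q\n", want)
		}
	}
}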

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "sudo systemctl is-active crio": exit status 83 (39.504708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-250000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-250000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
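For reference, a sketch of the probe this test performs: with docker as the active runtime, `systemctl is-active crio` inside the guest should report "inactive". The binary and profile names come from the log; a running host is assumed.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// systemctl is-active exits non-zero for inactive units, so ignore the
	// error and inspect the printed state instead.
	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-250000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	if state := strings.TrimSpace(string(out)); state != "inactive" {
		fmt.Printf("expected crio to be inactive, got %q\n", state)
	}
}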

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 version -o=json --components: exit status 83 (41.782792ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
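The component assertion checks substrings of the JSON output rather than a fixed schema. A minimal sketch mirroring that approach (same command as in the log; a running cluster is assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-250000",
		"version", "-o=json", "--components").Output()
	if err != nil {
		fmt.Println("version failed:", err)
		return
	}
	// Substring checks, as in the test; no assumptions about the JSON layout.
	for _, want := range []string{"buildctl", "commit", "containerd", "crictl",
		"crio", "ctr", "docker", "minikubeVersion", "podman", "crun"} {
		if !strings.Contains(string(out), want) {
			fmt.Printf("component %q not reported\n", want)
		}
	}
}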

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-250000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-250000 image ls --format short --alsologtostderr:
I0702 21:20:35.840644    7307 out.go:291] Setting OutFile to fd 1 ...
I0702 21:20:35.840790    7307 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:20:35.840795    7307 out.go:304] Setting ErrFile to fd 2...
I0702 21:20:35.840797    7307 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:20:35.840937    7307 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
I0702 21:20:35.841410    7307 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0702 21:20:35.841473    7307 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-250000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-250000 image ls --format table --alsologtostderr:
I0702 21:20:36.064082    7319 out.go:291] Setting OutFile to fd 1 ...
I0702 21:20:36.064246    7319 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:20:36.064250    7319 out.go:304] Setting ErrFile to fd 2...
I0702 21:20:36.064252    7319 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:20:36.064383    7319 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
I0702 21:20:36.064767    7319 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0702 21:20:36.064833    7319 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-250000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-250000 image ls --format json --alsologtostderr:
I0702 21:20:36.029592    7317 out.go:291] Setting OutFile to fd 1 ...
I0702 21:20:36.029755    7317 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:20:36.029759    7317 out.go:304] Setting ErrFile to fd 2...
I0702 21:20:36.029761    7317 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:20:36.029873    7317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
I0702 21:20:36.030318    7317 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0702 21:20:36.030375    7317 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)
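A sketch of the JSON-format assertion: the output should be a JSON array that mentions the pause image, whereas this run printed an empty array ([]). The loose decoding below is an assumption for illustration; the test itself only inspects the leading text.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-250000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var entries []json.RawMessage // decode loosely; only the array shape matters here
	if err := json.Unmarshal(out, &entries); err != nil {
		fmt.Println("output is not a JSON array:", err)
		return
	}
	if !strings.Contains(string(out), "registry.k8s.io/pause") {
		fmt.Printf("pause image not listed; got %d entries\n", len(entries))
	}
}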

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-250000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-250000 image ls --format yaml --alsologtostderr:
I0702 21:20:35.993121    7315 out.go:291] Setting OutFile to fd 1 ...
I0702 21:20:35.993281    7315 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:20:35.993285    7315 out.go:304] Setting ErrFile to fd 2...
I0702 21:20:35.993288    7315 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:20:35.993424    7315 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
I0702 21:20:35.993826    7315 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0702 21:20:35.993887    7315 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh pgrep buildkitd: exit status 83 (40.748667ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image build -t localhost/my-image:functional-250000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-250000 image build -t localhost/my-image:functional-250000 testdata/build --alsologtostderr:
I0702 21:20:35.917538    7311 out.go:291] Setting OutFile to fd 1 ...
I0702 21:20:35.918014    7311 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:20:35.918019    7311 out.go:304] Setting ErrFile to fd 2...
I0702 21:20:35.918021    7311 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:20:35.918174    7311 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
I0702 21:20:35.918561    7311 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0702 21:20:35.918962    7311 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0702 21:20:35.919184    7311 build_images.go:133] succeeded building to: 
I0702 21:20:35.919188    7311 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image ls
functional_test.go:442: expected "localhost/my-image:functional-250000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-250000 docker-env) && out/minikube-darwin-arm64 status -p functional-250000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-250000 docker-env) && out/minikube-darwin-arm64 status -p functional-250000": exit status 1 (42.835917ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)
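For reference, the shell round-trip under test, re-expressed as a small Go program; the script string is taken verbatim from the log above.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Eval the docker-env exports, then ask minikube for status in the same shell.
	script := `eval $(out/minikube-darwin-arm64 -p functional-250000 docker-env) && ` +
		`out/minikube-darwin-arm64 status -p functional-250000`
	if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
		fmt.Printf("status after eval-ing docker-env failed: %v\n%s", err, out)
	}
}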

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 update-context --alsologtostderr -v=2: exit status 83 (41.869875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:20:35.713751    7301 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:20:35.714165    7301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:20:35.714171    7301 out.go:304] Setting ErrFile to fd 2...
	I0702 21:20:35.714174    7301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:20:35.714361    7301 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:20:35.714575    7301 mustload.go:65] Loading cluster: functional-250000
	I0702 21:20:35.714751    7301 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:20:35.718723    7301 out.go:177] * The control-plane node functional-250000 host is not running: state=Stopped
	I0702 21:20:35.722639    7301 out.go:177]   To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-250000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-250000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-250000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 update-context --alsologtostderr -v=2: exit status 83 (42.715042ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:20:35.798548    7305 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:20:35.798697    7305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:20:35.798700    7305 out.go:304] Setting ErrFile to fd 2...
	I0702 21:20:35.798702    7305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:20:35.798831    7305 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:20:35.799059    7305 mustload.go:65] Loading cluster: functional-250000
	I0702 21:20:35.799243    7305 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:20:35.803698    7305 out.go:177] * The control-plane node functional-250000 host is not running: state=Stopped
	I0702 21:20:35.807665    7305 out.go:177]   To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-250000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-250000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-250000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 update-context --alsologtostderr -v=2: exit status 83 (41.511708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:20:35.756141    7303 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:20:35.756301    7303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:20:35.756305    7303 out.go:304] Setting ErrFile to fd 2...
	I0702 21:20:35.756307    7303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:20:35.756435    7303 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:20:35.756658    7303 mustload.go:65] Loading cluster: functional-250000
	I0702 21:20:35.756840    7303 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:20:35.761661    7303 out.go:177] * The control-plane node functional-250000 host is not running: state=Stopped
	I0702 21:20:35.765606    7303 out.go:177]   To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-250000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-250000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-250000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-250000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-250000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.320666ms)

                                                
                                                
** stderr ** 
	error: context "functional-250000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-250000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 service list: exit status 83 (46.094125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-250000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-250000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-250000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 service list -o json: exit status 83 (43.921375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-250000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 service --namespace=default --https --url hello-node: exit status 83 (49.699333ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-250000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 service hello-node --url --format={{.IP}}: exit status 83 (45.766709ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-250000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-250000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-250000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 service hello-node --url: exit status 83 (43.955041ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-250000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
functional_test.go:1565: failed to parse "* The control-plane node functional-250000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-250000\"": parse "* The control-plane node functional-250000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-250000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
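The parse failure above is reproducible in isolation: url.Parse rejects control characters, so the multi-line advisory printed by the stopped-host path can never validate as a URL. A self-contained sketch:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The advisory text the stopped-host path prints in place of a service URL.
	got := "* The control-plane node functional-250000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-250000\""
	if _, err := url.Parse(got); err != nil {
		fmt.Println(err) // net/url: invalid control character in URL
	}
}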

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-250000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-250000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0702 21:19:53.945901    7095 out.go:291] Setting OutFile to fd 1 ...
I0702 21:19:53.946053    7095 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:19:53.946057    7095 out.go:304] Setting ErrFile to fd 2...
I0702 21:19:53.946060    7095 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:19:53.946186    7095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
I0702 21:19:53.946437    7095 mustload.go:65] Loading cluster: functional-250000
I0702 21:19:53.946638    7095 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0702 21:19:53.951932    7095 out.go:177] * The control-plane node functional-250000 host is not running: state=Stopped
I0702 21:19:53.964942    7095 out.go:177]   To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
stdout: * The control-plane node functional-250000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-250000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-250000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7096: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-250000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-250000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-250000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-250000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-250000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)
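A rough sketch of the scenario this test exercises, launching a second tunnel while the first is still up; the two-second delay is arbitrary, and the real test's assertions are more involved than shown here.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	tunnel := func() *exec.Cmd {
		return exec.Command("out/minikube-darwin-arm64",
			"-p", "functional-250000", "tunnel", "--alsologtostderr")
	}
	first := tunnel()
	if err := first.Start(); err != nil {
		fmt.Println("first tunnel:", err)
		return
	}
	defer first.Process.Kill()
	time.Sleep(2 * time.Second) // arbitrary settling delay
	// In this run the second tunnel exits immediately with status 83;
	// on a healthy cluster it would keep running until killed.
	out, err := tunnel().CombinedOutput()
	fmt.Printf("second tunnel: err=%v\n%s", err, out)
}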

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-250000": client config: context "functional-250000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (94.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-250000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-250000 get svc nginx-svc: exit status 1 (68.646209ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-250000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-250000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (94.10s)
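The empty URL in the first line above means the LoadBalancer ingress IP was never populated, so the probe had no host to hit. A minimal sketch of the direct-access check, with a placeholder IP standing in for the address the test reads from the service status:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://10.96.0.100/") // placeholder; the test reads the real ingress IP
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if !strings.Contains(string(body), "Welcome to nginx!") {
		fmt.Println(`expected body to contain "Welcome to nginx!"`)
	}
}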

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image load --daemon gcr.io/google-containers/addon-resizer:functional-250000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-250000 image load --daemon gcr.io/google-containers/addon-resizer:functional-250000 --alsologtostderr: (1.304535958s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-250000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image load --daemon gcr.io/google-containers/addon-resizer:functional-250000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-250000 image load --daemon gcr.io/google-containers/addon-resizer:functional-250000 --alsologtostderr: (1.291583375s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-250000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.278354333s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-250000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image load --daemon gcr.io/google-containers/addon-resizer:functional-250000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-250000 image load --daemon gcr.io/google-containers/addon-resizer:functional-250000 --alsologtostderr: (1.21388725s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-250000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image save gcr.io/google-containers/addon-resizer:functional-250000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)
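A sketch of the post-save assertion: `image save` should leave a tarball at the target path, which a plain os.Stat can verify. The path below is illustrative, not the one the test uses.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tar := "/tmp/addon-resizer-save.tar" // illustrative target path
	save := exec.Command("out/minikube-darwin-arm64", "-p", "functional-250000",
		"image", "save", "gcr.io/google-containers/addon-resizer:functional-250000", tar)
	if out, err := save.CombinedOutput(); err != nil {
		fmt.Printf("image save failed: %v\n%s", err, out)
	}
	if _, err := os.Stat(tar); err != nil {
		fmt.Printf("expected %q to exist after image save: %v\n", tar, err)
	}
}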

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-250000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.029281667s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
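The dig query above pins the in-cluster DNS server (10.96.0.10), which is only reachable while the tunnel is up. The same lookup can be expressed in Go with a custom resolver; a sketch, with timeouts chosen to roughly match dig's +time=5 +tries=3:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53") // pin the cluster DNS
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	ips, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err) // times out when the tunnel is down
		return
	}
	fmt.Println("resolved:", ips)
}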

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (39.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (39.69s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (10.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-862000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-862000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.124500541s)

                                                
                                                
-- stdout --
	* [ha-862000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-862000" primary control-plane node in "ha-862000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-862000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:22:33.312349    7366 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:22:33.312473    7366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:22:33.312480    7366 out.go:304] Setting ErrFile to fd 2...
	I0702 21:22:33.312483    7366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:22:33.312630    7366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:22:33.313706    7366 out.go:298] Setting JSON to false
	I0702 21:22:33.330051    7366 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4922,"bootTime":1719975631,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:22:33.330133    7366 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:22:33.334083    7366 out.go:177] * [ha-862000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:22:33.340963    7366 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:22:33.341072    7366 notify.go:220] Checking for updates...
	I0702 21:22:33.347880    7366 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:22:33.350912    7366 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:22:33.353969    7366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:22:33.356954    7366 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:22:33.359945    7366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:22:33.363213    7366 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:22:33.366903    7366 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:22:33.373943    7366 start.go:297] selected driver: qemu2
	I0702 21:22:33.373951    7366 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:22:33.373956    7366 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:22:33.376280    7366 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:22:33.378820    7366 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:22:33.382015    7366 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:22:33.382043    7366 cni.go:84] Creating CNI manager for ""
	I0702 21:22:33.382047    7366 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0702 21:22:33.382050    7366 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0702 21:22:33.382081    7366 start.go:340] cluster config:
	{Name:ha-862000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:22:33.385886    7366 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:22:33.393903    7366 out.go:177] * Starting "ha-862000" primary control-plane node in "ha-862000" cluster
	I0702 21:22:33.397801    7366 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:22:33.397821    7366 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:22:33.397826    7366 cache.go:56] Caching tarball of preloaded images
	I0702 21:22:33.397887    7366 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:22:33.397892    7366 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:22:33.398082    7366 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/ha-862000/config.json ...
	I0702 21:22:33.398096    7366 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/ha-862000/config.json: {Name:mk9fdc5e4815186faacf40bd453a7824c60900bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:22:33.398425    7366 start.go:360] acquireMachinesLock for ha-862000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:22:33.398461    7366 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "ha-862000"
	I0702 21:22:33.398487    7366 start.go:93] Provisioning new machine with config: &{Name:ha-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:22:33.398513    7366 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:22:33.402884    7366 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:22:33.418007    7366 start.go:159] libmachine.API.Create for "ha-862000" (driver="qemu2")
	I0702 21:22:33.418033    7366 client.go:168] LocalClient.Create starting
	I0702 21:22:33.418088    7366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:22:33.418116    7366 main.go:141] libmachine: Decoding PEM data...
	I0702 21:22:33.418123    7366 main.go:141] libmachine: Parsing certificate...
	I0702 21:22:33.418162    7366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:22:33.418189    7366 main.go:141] libmachine: Decoding PEM data...
	I0702 21:22:33.418199    7366 main.go:141] libmachine: Parsing certificate...
	I0702 21:22:33.418590    7366 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:22:33.547131    7366 main.go:141] libmachine: Creating SSH key...
	I0702 21:22:33.773569    7366 main.go:141] libmachine: Creating Disk image...
	I0702 21:22:33.773577    7366 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:22:33.773833    7366 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2
	I0702 21:22:33.783830    7366 main.go:141] libmachine: STDOUT: 
	I0702 21:22:33.783854    7366 main.go:141] libmachine: STDERR: 
	I0702 21:22:33.783905    7366 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2 +20000M
	I0702 21:22:33.791965    7366 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:22:33.791980    7366 main.go:141] libmachine: STDERR: 
	I0702 21:22:33.791990    7366 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2
	I0702 21:22:33.791996    7366 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:22:33.792020    7366 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:60:67:40:e2:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2
	I0702 21:22:33.793761    7366 main.go:141] libmachine: STDOUT: 
	I0702 21:22:33.793775    7366 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:22:33.793793    7366 client.go:171] duration metric: took 375.763166ms to LocalClient.Create
	I0702 21:22:35.795960    7366 start.go:128] duration metric: took 2.397467667s to createHost
	I0702 21:22:35.796029    7366 start.go:83] releasing machines lock for "ha-862000", held for 2.397606417s
	W0702 21:22:35.796098    7366 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:22:35.804340    7366 out.go:177] * Deleting "ha-862000" in qemu2 ...
	W0702 21:22:35.829455    7366 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:22:35.829496    7366 start.go:728] Will try again in 5 seconds ...
	I0702 21:22:40.831688    7366 start.go:360] acquireMachinesLock for ha-862000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:22:40.832251    7366 start.go:364] duration metric: took 448.542µs to acquireMachinesLock for "ha-862000"
	I0702 21:22:40.832381    7366 start.go:93] Provisioning new machine with config: &{Name:ha-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:22:40.832756    7366 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:22:40.842508    7366 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:22:40.891106    7366 start.go:159] libmachine.API.Create for "ha-862000" (driver="qemu2")
	I0702 21:22:40.891165    7366 client.go:168] LocalClient.Create starting
	I0702 21:22:40.891275    7366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:22:40.891343    7366 main.go:141] libmachine: Decoding PEM data...
	I0702 21:22:40.891358    7366 main.go:141] libmachine: Parsing certificate...
	I0702 21:22:40.891425    7366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:22:40.891468    7366 main.go:141] libmachine: Decoding PEM data...
	I0702 21:22:40.891482    7366 main.go:141] libmachine: Parsing certificate...
	I0702 21:22:40.892227    7366 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:22:41.029791    7366 main.go:141] libmachine: Creating SSH key...
	I0702 21:22:41.346489    7366 main.go:141] libmachine: Creating Disk image...
	I0702 21:22:41.346503    7366 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:22:41.346706    7366 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2
	I0702 21:22:41.356501    7366 main.go:141] libmachine: STDOUT: 
	I0702 21:22:41.356522    7366 main.go:141] libmachine: STDERR: 
	I0702 21:22:41.356581    7366 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2 +20000M
	I0702 21:22:41.364417    7366 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:22:41.364432    7366 main.go:141] libmachine: STDERR: 
	I0702 21:22:41.364444    7366 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2
	I0702 21:22:41.364448    7366 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:22:41.364486    7366 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:76:bb:61:40:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2
	I0702 21:22:41.366132    7366 main.go:141] libmachine: STDOUT: 
	I0702 21:22:41.366155    7366 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:22:41.366171    7366 client.go:171] duration metric: took 475.008625ms to LocalClient.Create
	I0702 21:22:43.368368    7366 start.go:128] duration metric: took 2.535595541s to createHost
	I0702 21:22:43.368512    7366 start.go:83] releasing machines lock for "ha-862000", held for 2.536285959s
	W0702 21:22:43.368936    7366 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-862000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-862000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:22:43.377459    7366 out.go:177] 
	W0702 21:22:43.382762    7366 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:22:43.382793    7366 out.go:239] * 
	* 
	W0702 21:22:43.385385    7366 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:22:43.394531    7366 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-862000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (67.234458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.19s)
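
Every VM-create attempt in this run dies at the same step: socket_vmnet_client exits with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', so QEMU never receives its network file descriptor and host creation is rolled back. The daemon's state can be checked independently of minikube with a minimal Go probe of that unix socket; the path below is taken from SocketVMnetPath in the config above, and the helper itself is not part of the test suite, just a sketch:

	// socketprobe.go: dial socket_vmnet's unix socket and report the result.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// A stopped daemon reproduces the failure seen above:
			// "connect: connection refused".
			fmt.Fprintf(os.Stderr, "socket_vmnet probe: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, the socket_vmnet daemon needs to be restarted on the Jenkins host before the rest of this report is meaningful; nearly every failure below is downstream of this one refused connection.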

TestMultiControlPlane/serial/DeployApp (110.91s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (61.825708ms)

** stderr ** 
	error: cluster "ha-862000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- rollout status deployment/busybox: exit status 1 (57.686167ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.153959ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.568292ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.678042ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.173125ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.185625ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (88.812792ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.995666ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.255708ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.99625ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.491375ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.254334ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.866083ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.289292ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.057625ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.532292ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (30.215084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (110.91s)

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-862000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.858459ms)

** stderr ** 
	error: no server found for cluster "ha-862000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (29.262125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-862000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-862000 -v=7 --alsologtostderr: exit status 83 (41.935458ms)

-- stdout --
	* The control-plane node ha-862000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-862000"

-- /stdout --
** stderr ** 
	I0702 21:24:34.495305    7475 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:24:34.495885    7475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:34.495890    7475 out.go:304] Setting ErrFile to fd 2...
	I0702 21:24:34.495893    7475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:34.496066    7475 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:24:34.496289    7475 mustload.go:65] Loading cluster: ha-862000
	I0702 21:24:34.496486    7475 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:24:34.500752    7475 out.go:177] * The control-plane node ha-862000 host is not running: state=Stopped
	I0702 21:24:34.504747    7475 out.go:177]   To start a cluster, run: "minikube start -p ha-862000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-862000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (30.083709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-862000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-862000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (25.924042ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-862000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-862000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-862000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
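
Two errors stack here. The primary one is that the ha-862000 context was never written to the kubeconfig because the cluster never started; the secondary "unexpected end of JSON input" is simply encoding/json reacting to the empty stdout of the failed kubectl call. A tiny sketch (an illustration, not the test's code) shows where that second message comes from:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels) // empty stdout from the failed kubectl call
		fmt.Println(err)                           // prints: unexpected end of JSON input
	}

The decode error is therefore noise; the actionable failure is still the missing cluster context.
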
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (29.872792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-862000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-862000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-862000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-862000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-862000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-862000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-862000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-862000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (30.34575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
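
Both assertions in this test walk the 'profile list' JSON: one counts Config.Nodes (the harness expects four at this point in the serial sequence), the other reads Status (expecting "HAppy"). Since the start was aborted, the profile was saved with only its single declared control-plane node and Status "Stopped". A throwaway decoder over the same JSON makes that easy to eyeball; the field names are taken from the output above and the binary path matches the args shown, but the helper itself is hypothetical:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	type profileList struct {
		Valid []struct {
			Name   string
			Status string
			Config struct {
				Nodes []json.RawMessage // only the count matters here
			}
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
		}
	}

Against this run it would print "ha-862000: status=Stopped nodes=1".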

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 status --output json -v=7 --alsologtostderr: exit status 7 (29.448291ms)

-- stdout --
	{"Name":"ha-862000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0702 21:24:34.699201    7487 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:24:34.699345    7487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:34.699349    7487 out.go:304] Setting ErrFile to fd 2...
	I0702 21:24:34.699352    7487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:34.699473    7487 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:24:34.699597    7487 out.go:298] Setting JSON to true
	I0702 21:24:34.699609    7487 mustload.go:65] Loading cluster: ha-862000
	I0702 21:24:34.699675    7487 notify.go:220] Checking for updates...
	I0702 21:24:34.699822    7487 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:24:34.699828    7487 status.go:255] checking status of ha-862000 ...
	I0702 21:24:34.700040    7487 status.go:330] ha-862000 host status = "Stopped" (err=<nil>)
	I0702 21:24:34.700044    7487 status.go:343] host is not running, skipping remaining checks
	I0702 21:24:34.700046    7487 status.go:257] ha-862000 status: &{Name:ha-862000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-862000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
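
The decode failure is a shape mismatch rather than corruption: for a single-node profile, 'status --output json' emits one bare object (the stdout above), while the test unmarshals into []cmd.Status, the array shape a multi-node cluster would produce. A tolerant decoder sketch (struct fields mirrored from the stdout above; an illustration, not the test's actual helper) accepts both shapes:

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
	)

	type nodeStatus struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	// decodeStatuses accepts either a bare object (one node) or an array.
	func decodeStatuses(out []byte) ([]nodeStatus, error) {
		out = bytes.TrimSpace(out)
		if len(out) > 0 && out[0] == '{' {
			var s nodeStatus
			if err := json.Unmarshal(out, &s); err != nil {
				return nil, err
			}
			return []nodeStatus{s}, nil
		}
		var ss []nodeStatus
		err := json.Unmarshal(out, &ss)
		return ss, err
	}

	func main() {
		single := []byte(`{"Name":"ha-862000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		ss, err := decodeStatuses(single)
		fmt.Println(ss, err) // [{ha-862000 Stopped Stopped Stopped Stopped false}] <nil>
	}

The mismatch itself is still a symptom: had all three control-plane nodes come up, status would have printed the array the test expects.
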
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (29.615208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 node stop m02 -v=7 --alsologtostderr: exit status 85 (48.398041ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0702 21:24:34.759867    7491 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:24:34.760447    7491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:34.760452    7491 out.go:304] Setting ErrFile to fd 2...
	I0702 21:24:34.760454    7491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:34.760637    7491 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:24:34.760871    7491 mustload.go:65] Loading cluster: ha-862000
	I0702 21:24:34.761052    7491 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:24:34.765690    7491 out.go:177] 
	W0702 21:24:34.768709    7491 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0702 21:24:34.768714    7491 out.go:239] * 
	* 
	W0702 21:24:34.770836    7491 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:24:34.774555    7491 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-862000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr: exit status 7 (30.050791ms)

-- stdout --
	ha-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0702 21:24:34.808039    7493 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:24:34.808189    7493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:34.808193    7493 out.go:304] Setting ErrFile to fd 2...
	I0702 21:24:34.808196    7493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:34.808339    7493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:24:34.808463    7493 out.go:298] Setting JSON to false
	I0702 21:24:34.808476    7493 mustload.go:65] Loading cluster: ha-862000
	I0702 21:24:34.808522    7493 notify.go:220] Checking for updates...
	I0702 21:24:34.808664    7493 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:24:34.808669    7493 status.go:255] checking status of ha-862000 ...
	I0702 21:24:34.808901    7493 status.go:330] ha-862000 host status = "Stopped" (err=<nil>)
	I0702 21:24:34.808905    7493 status.go:343] host is not running, skipping remaining checks
	I0702 21:24:34.808907    7493 status.go:257] ha-862000 status: &{Name:ha-862000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr": ha-862000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr": ha-862000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr": ha-862000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr": ha-862000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (30.357958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
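Note on the failure mode: all of the ha_test.go:375-384 assertions above fail for the same underlying reason. "node stop m02" exits with status 85 before anything can be stopped (the next subtest shows minikube cannot find a node m02 at all), and "status" then reports a single stopped control-plane host with exit status 7. As a purely illustrative aid for readers replaying these commands by hand, here is a minimal Go sketch of reading such exit statuses back from a minikube invocation; binary path, profile and arguments are copied from the log, and the wrapper itself is hypothetical, not harness code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the failing test quotes at ha_test.go:365 above.
	cmd := exec.Command("out/minikube-darwin-arm64",
		"-p", "ha-862000", "node", "stop", "m02", "-v=7", "--alsologtostderr")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The log above shows exit status 85 for this command.
		fmt.Println("exit status:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}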

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-862000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-862000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-862000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-862000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (29.2945ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (43.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 node start m02 -v=7 --alsologtostderr: exit status 85 (47.38225ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:24:34.944684    7502 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:24:34.945108    7502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:34.945113    7502 out.go:304] Setting ErrFile to fd 2...
	I0702 21:24:34.945115    7502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:34.945265    7502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:24:34.945564    7502 mustload.go:65] Loading cluster: ha-862000
	I0702 21:24:34.945747    7502 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:24:34.948732    7502 out.go:177] 
	W0702 21:24:34.952702    7502 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0702 21:24:34.952707    7502 out.go:239] * 
	* 
	W0702 21:24:34.954575    7502 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:24:34.958660    7502 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0702 21:24:34.944684    7502 out.go:291] Setting OutFile to fd 1 ...
I0702 21:24:34.945108    7502 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:24:34.945113    7502 out.go:304] Setting ErrFile to fd 2...
I0702 21:24:34.945115    7502 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:24:34.945265    7502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
I0702 21:24:34.945564    7502 mustload.go:65] Loading cluster: ha-862000
I0702 21:24:34.945747    7502 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0702 21:24:34.948732    7502 out.go:177] 
W0702 21:24:34.952702    7502 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0702 21:24:34.952707    7502 out.go:239] * 
* 
W0702 21:24:34.954575    7502 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0702 21:24:34.958660    7502 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-862000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr: exit status 7 (30.229667ms)

                                                
                                                
-- stdout --
	ha-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:24:34.992141    7504 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:24:34.992326    7504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:34.992330    7504 out.go:304] Setting ErrFile to fd 2...
	I0702 21:24:34.992332    7504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:34.992480    7504 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:24:34.992615    7504 out.go:298] Setting JSON to false
	I0702 21:24:34.992630    7504 mustload.go:65] Loading cluster: ha-862000
	I0702 21:24:34.992687    7504 notify.go:220] Checking for updates...
	I0702 21:24:34.992831    7504 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:24:34.992837    7504 status.go:255] checking status of ha-862000 ...
	I0702 21:24:34.993040    7504 status.go:330] ha-862000 host status = "Stopped" (err=<nil>)
	I0702 21:24:34.993044    7504 status.go:343] host is not running, skipping remaining checks
	I0702 21:24:34.993046    7504 status.go:257] ha-862000 status: &{Name:ha-862000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr: exit status 7 (74.252209ms)

                                                
                                                
-- stdout --
	ha-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:24:35.806922    7506 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:24:35.807138    7506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:35.807143    7506 out.go:304] Setting ErrFile to fd 2...
	I0702 21:24:35.807147    7506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:35.807313    7506 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:24:35.807469    7506 out.go:298] Setting JSON to false
	I0702 21:24:35.807485    7506 mustload.go:65] Loading cluster: ha-862000
	I0702 21:24:35.807509    7506 notify.go:220] Checking for updates...
	I0702 21:24:35.807746    7506 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:24:35.807755    7506 status.go:255] checking status of ha-862000 ...
	I0702 21:24:35.808023    7506 status.go:330] ha-862000 host status = "Stopped" (err=<nil>)
	I0702 21:24:35.808028    7506 status.go:343] host is not running, skipping remaining checks
	I0702 21:24:35.808031    7506 status.go:257] ha-862000 status: &{Name:ha-862000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr: exit status 7 (73.324083ms)

                                                
                                                
-- stdout --
	ha-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:24:36.940876    7508 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:24:36.941055    7508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:36.941061    7508 out.go:304] Setting ErrFile to fd 2...
	I0702 21:24:36.941064    7508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:36.941246    7508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:24:36.941421    7508 out.go:298] Setting JSON to false
	I0702 21:24:36.941436    7508 mustload.go:65] Loading cluster: ha-862000
	I0702 21:24:36.941477    7508 notify.go:220] Checking for updates...
	I0702 21:24:36.941682    7508 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:24:36.941689    7508 status.go:255] checking status of ha-862000 ...
	I0702 21:24:36.941986    7508 status.go:330] ha-862000 host status = "Stopped" (err=<nil>)
	I0702 21:24:36.941991    7508 status.go:343] host is not running, skipping remaining checks
	I0702 21:24:36.941994    7508 status.go:257] ha-862000 status: &{Name:ha-862000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr: exit status 7 (73.674458ms)

                                                
                                                
-- stdout --
	ha-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:24:38.956530    7510 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:24:38.956726    7510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:38.956732    7510 out.go:304] Setting ErrFile to fd 2...
	I0702 21:24:38.956735    7510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:38.956896    7510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:24:38.957049    7510 out.go:298] Setting JSON to false
	I0702 21:24:38.957065    7510 mustload.go:65] Loading cluster: ha-862000
	I0702 21:24:38.957092    7510 notify.go:220] Checking for updates...
	I0702 21:24:38.957310    7510 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:24:38.957318    7510 status.go:255] checking status of ha-862000 ...
	I0702 21:24:38.957593    7510 status.go:330] ha-862000 host status = "Stopped" (err=<nil>)
	I0702 21:24:38.957598    7510 status.go:343] host is not running, skipping remaining checks
	I0702 21:24:38.957601    7510 status.go:257] ha-862000 status: &{Name:ha-862000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr: exit status 7 (72.300958ms)

                                                
                                                
-- stdout --
	ha-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:24:41.266568    7512 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:24:41.266758    7512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:41.266764    7512 out.go:304] Setting ErrFile to fd 2...
	I0702 21:24:41.266767    7512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:41.266932    7512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:24:41.267090    7512 out.go:298] Setting JSON to false
	I0702 21:24:41.267105    7512 mustload.go:65] Loading cluster: ha-862000
	I0702 21:24:41.267140    7512 notify.go:220] Checking for updates...
	I0702 21:24:41.267367    7512 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:24:41.267374    7512 status.go:255] checking status of ha-862000 ...
	I0702 21:24:41.267663    7512 status.go:330] ha-862000 host status = "Stopped" (err=<nil>)
	I0702 21:24:41.267668    7512 status.go:343] host is not running, skipping remaining checks
	I0702 21:24:41.267671    7512 status.go:257] ha-862000 status: &{Name:ha-862000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr: exit status 7 (74.024ms)

                                                
                                                
-- stdout --
	ha-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:24:44.142181    7514 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:24:44.142370    7514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:44.142375    7514 out.go:304] Setting ErrFile to fd 2...
	I0702 21:24:44.142378    7514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:44.142541    7514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:24:44.142691    7514 out.go:298] Setting JSON to false
	I0702 21:24:44.142711    7514 mustload.go:65] Loading cluster: ha-862000
	I0702 21:24:44.142747    7514 notify.go:220] Checking for updates...
	I0702 21:24:44.142941    7514 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:24:44.142949    7514 status.go:255] checking status of ha-862000 ...
	I0702 21:24:44.143254    7514 status.go:330] ha-862000 host status = "Stopped" (err=<nil>)
	I0702 21:24:44.143259    7514 status.go:343] host is not running, skipping remaining checks
	I0702 21:24:44.143262    7514 status.go:257] ha-862000 status: &{Name:ha-862000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr: exit status 7 (72.596792ms)

                                                
                                                
-- stdout --
	ha-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:24:49.562836    7521 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:24:49.563071    7521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:49.563077    7521 out.go:304] Setting ErrFile to fd 2...
	I0702 21:24:49.563081    7521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:49.563241    7521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:24:49.563397    7521 out.go:298] Setting JSON to false
	I0702 21:24:49.563416    7521 mustload.go:65] Loading cluster: ha-862000
	I0702 21:24:49.563449    7521 notify.go:220] Checking for updates...
	I0702 21:24:49.563677    7521 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:24:49.563690    7521 status.go:255] checking status of ha-862000 ...
	I0702 21:24:49.563967    7521 status.go:330] ha-862000 host status = "Stopped" (err=<nil>)
	I0702 21:24:49.563972    7521 status.go:343] host is not running, skipping remaining checks
	I0702 21:24:49.563975    7521 status.go:257] ha-862000 status: &{Name:ha-862000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr: exit status 7 (71.97175ms)

                                                
                                                
-- stdout --
	ha-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:24:56.752583    7524 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:24:56.752889    7524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:56.752899    7524 out.go:304] Setting ErrFile to fd 2...
	I0702 21:24:56.752905    7524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:24:56.753257    7524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:24:56.753601    7524 out.go:298] Setting JSON to false
	I0702 21:24:56.753628    7524 mustload.go:65] Loading cluster: ha-862000
	I0702 21:24:56.753672    7524 notify.go:220] Checking for updates...
	I0702 21:24:56.754033    7524 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:24:56.754050    7524 status.go:255] checking status of ha-862000 ...
	I0702 21:24:56.754323    7524 status.go:330] ha-862000 host status = "Stopped" (err=<nil>)
	I0702 21:24:56.754328    7524 status.go:343] host is not running, skipping remaining checks
	I0702 21:24:56.754331    7524 status.go:257] ha-862000 status: &{Name:ha-862000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr: exit status 7 (72.12025ms)

                                                
                                                
-- stdout --
	ha-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:25:18.196924    7533 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:25:18.197189    7533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:18.197196    7533 out.go:304] Setting ErrFile to fd 2...
	I0702 21:25:18.197199    7533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:18.197386    7533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:25:18.197602    7533 out.go:298] Setting JSON to false
	I0702 21:25:18.197622    7533 mustload.go:65] Loading cluster: ha-862000
	I0702 21:25:18.197662    7533 notify.go:220] Checking for updates...
	I0702 21:25:18.197933    7533 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:25:18.197942    7533 status.go:255] checking status of ha-862000 ...
	I0702 21:25:18.198274    7533 status.go:330] ha-862000 host status = "Stopped" (err=<nil>)
	I0702 21:25:18.198279    7533 status.go:343] host is not running, skipping remaining checks
	I0702 21:25:18.198282    7533 status.go:257] ha-862000 status: &{Name:ha-862000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (34.078334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (43.32s)
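The repeated ha_test.go:428 status calls above (21:24:34 through 21:25:18, with growing gaps between attempts) are the harness polling for recovery; every attempt exits with status 7 because the restart at ha_test.go:420 already failed with GUEST_NODE_RETRIEVE, node m02 never having been created. A rough, self-contained Go sketch of that poll-with-backoff pattern follows; command and profile name are taken from the log, while the loop itself is an illustration, not the actual harness code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(45 * time.Second)
	wait := time.Second
	for time.Now().Before(deadline) {
		// Same status query the post-mortem uses (helpers_test.go:239).
		out, err := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "ha-862000").Output()
		host := strings.TrimSpace(string(out))
		if err == nil && host == "Running" {
			fmt.Println("host is running")
			return
		}
		// Exit status 7 with "Stopped" just means not running yet; retry.
		time.Sleep(wait)
		wait *= 2
	}
	fmt.Println("gave up: host never left the Stopped state")
}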

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-862000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-862000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-862000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-862000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-862000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-862000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-862000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-862000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (29.360708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.07s)
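Both assertions above decode the escaped JSON returned by "profile list --output json": the profile reports Status "Stopped" and a single entry under Config.Nodes, where the test wants "HAppy" and 4 nodes. Below is a minimal Go sketch of the same check, modeling only the handful of fields the assertions touch (the full schema is the Config object visible in the log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models just the fields used here; the real output has many more.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Name != "ha-862000" {
			continue
		}
		// The test expects 4 nodes and "HAppy"; the log shows 1 and "Stopped".
		fmt.Printf("status=%s nodes=%d\n", p.Status, len(p.Config.Nodes))
	}
}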

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-862000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-862000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-862000 -v=7 --alsologtostderr: (3.619040084s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-862000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-862000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.22355675s)

                                                
                                                
-- stdout --
	* [ha-862000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-862000" primary control-plane node in "ha-862000" cluster
	* Restarting existing qemu2 VM for "ha-862000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-862000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:25:22.021346    7565 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:25:22.021518    7565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:22.021524    7565 out.go:304] Setting ErrFile to fd 2...
	I0702 21:25:22.021527    7565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:22.021708    7565 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:25:22.023026    7565 out.go:298] Setting JSON to false
	I0702 21:25:22.042242    7565 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5091,"bootTime":1719975631,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:25:22.042302    7565 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:25:22.047032    7565 out.go:177] * [ha-862000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:25:22.054973    7565 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:25:22.055047    7565 notify.go:220] Checking for updates...
	I0702 21:25:22.061955    7565 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:25:22.064927    7565 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:25:22.067959    7565 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:25:22.070843    7565 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:25:22.073949    7565 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:25:22.077244    7565 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:25:22.077305    7565 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:25:22.079881    7565 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:25:22.086969    7565 start.go:297] selected driver: qemu2
	I0702 21:25:22.086976    7565 start.go:901] validating driver "qemu2" against &{Name:ha-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:25:22.087028    7565 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:25:22.089354    7565 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:25:22.089410    7565 cni.go:84] Creating CNI manager for ""
	I0702 21:25:22.089415    7565 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0702 21:25:22.089459    7565 start.go:340] cluster config:
	{Name:ha-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:25:22.093308    7565 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:25:22.101942    7565 out.go:177] * Starting "ha-862000" primary control-plane node in "ha-862000" cluster
	I0702 21:25:22.105931    7565 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:25:22.105946    7565 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:25:22.105958    7565 cache.go:56] Caching tarball of preloaded images
	I0702 21:25:22.106023    7565 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:25:22.106029    7565 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:25:22.106086    7565 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/ha-862000/config.json ...
	I0702 21:25:22.106521    7565 start.go:360] acquireMachinesLock for ha-862000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:25:22.106560    7565 start.go:364] duration metric: took 31.959µs to acquireMachinesLock for "ha-862000"
	I0702 21:25:22.106571    7565 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:25:22.106576    7565 fix.go:54] fixHost starting: 
	I0702 21:25:22.106704    7565 fix.go:112] recreateIfNeeded on ha-862000: state=Stopped err=<nil>
	W0702 21:25:22.106713    7565 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:25:22.114981    7565 out.go:177] * Restarting existing qemu2 VM for "ha-862000" ...
	I0702 21:25:22.118952    7565 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:76:bb:61:40:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2
	I0702 21:25:22.121269    7565 main.go:141] libmachine: STDOUT: 
	I0702 21:25:22.121290    7565 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:25:22.121322    7565 fix.go:56] duration metric: took 14.746333ms for fixHost
	I0702 21:25:22.121327    7565 start.go:83] releasing machines lock for "ha-862000", held for 14.762584ms
	W0702 21:25:22.121334    7565 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:25:22.121367    7565 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:25:22.121373    7565 start.go:728] Will try again in 5 seconds ...
	I0702 21:25:27.123515    7565 start.go:360] acquireMachinesLock for ha-862000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:25:27.123884    7565 start.go:364] duration metric: took 293.875µs to acquireMachinesLock for "ha-862000"
	I0702 21:25:27.124008    7565 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:25:27.124028    7565 fix.go:54] fixHost starting: 
	I0702 21:25:27.124734    7565 fix.go:112] recreateIfNeeded on ha-862000: state=Stopped err=<nil>
	W0702 21:25:27.124759    7565 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:25:27.128249    7565 out.go:177] * Restarting existing qemu2 VM for "ha-862000" ...
	I0702 21:25:27.135369    7565 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:76:bb:61:40:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2
	I0702 21:25:27.144419    7565 main.go:141] libmachine: STDOUT: 
	I0702 21:25:27.144490    7565 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:25:27.144577    7565 fix.go:56] duration metric: took 20.544625ms for fixHost
	I0702 21:25:27.144597    7565 start.go:83] releasing machines lock for "ha-862000", held for 20.691958ms
	W0702 21:25:27.144781    7565 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-862000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-862000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:25:27.153267    7565 out.go:177] 
	W0702 21:25:27.157088    7565 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:25:27.157112    7565 out.go:239] * 
	* 
	W0702 21:25:27.159499    7565 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:25:27.167185    7565 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-862000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-862000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (33.387791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.97s)
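
All of the qemu2 failures in this report reduce to the same driver-level error: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet ("Connection refused"), so the VM's network backend never comes up and every start attempt is aborted. A minimal pre-flight probe for that daemon, sketched in Go (illustrative only, not part of minikube or this test suite; the socket path is copied from the failing command line above):

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// socket_vmnet_client expects a live daemon listening on this unix
		// socket; the path is taken from the qemu invocation in the log.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.Dial("unix", sock)
		if err != nil {
			// With the daemon down, this reports the same "connection
			// refused" seen throughout this report.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
	}

If the probe fails the same way, restarting the socket_vmnet daemon on the CI host is the more plausible fix than the suggested "minikube delete -p ha-862000", which only removes the profile.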

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.003292ms)

-- stdout --
	* The control-plane node ha-862000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-862000"

-- /stdout --
** stderr ** 
	I0702 21:25:27.309554    7577 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:25:27.310157    7577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:27.310166    7577 out.go:304] Setting ErrFile to fd 2...
	I0702 21:25:27.310169    7577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:27.310326    7577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:25:27.310531    7577 mustload.go:65] Loading cluster: ha-862000
	I0702 21:25:27.310714    7577 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:25:27.314242    7577 out.go:177] * The control-plane node ha-862000 host is not running: state=Stopped
	I0702 21:25:27.317212    7577 out.go:177]   To start a cluster, run: "minikube start -p ha-862000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-862000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr: exit status 7 (28.7115ms)

-- stdout --
	ha-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0702 21:25:27.348154    7579 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:25:27.348292    7579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:27.348296    7579 out.go:304] Setting ErrFile to fd 2...
	I0702 21:25:27.348299    7579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:27.348415    7579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:25:27.348528    7579 out.go:298] Setting JSON to false
	I0702 21:25:27.348541    7579 mustload.go:65] Loading cluster: ha-862000
	I0702 21:25:27.348596    7579 notify.go:220] Checking for updates...
	I0702 21:25:27.348726    7579 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:25:27.348731    7579 status.go:255] checking status of ha-862000 ...
	I0702 21:25:27.348944    7579 status.go:330] ha-862000 host status = "Stopped" (err=<nil>)
	I0702 21:25:27.348948    7579 status.go:343] host is not running, skipping remaining checks
	I0702 21:25:27.348950    7579 status.go:257] ha-862000 status: &{Name:ha-862000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (29.370334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-862000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-862000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-862000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-862000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (28.915667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)
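
The assertion at ha_test.go:413 decodes the `profile list --output json` payload shown above and compares the profile's Status field against "Degraded". A sketch of that decode step, assuming only the valid[].Name / valid[].Status shape visible in the captured JSON (the test's actual types may differ):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Just the two fields the assertion needs; the large Config blob is ignored.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Abridged from the captured `minikube profile list --output json` output.
		raw := `{"invalid":[],"valid":[{"Name":"ha-862000","Status":"Stopped"}]}`
		var pl profileList
		if err := json.Unmarshal([]byte(raw), &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // prints "ha-862000: Stopped", hence the failure
		}
	}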

TestMultiControlPlane/serial/StopCluster (3.41s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-862000 stop -v=7 --alsologtostderr: (3.313704166s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr: exit status 7 (66.123792ms)

-- stdout --
	ha-862000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0702 21:25:30.832061    7606 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:25:30.832271    7606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:30.832276    7606 out.go:304] Setting ErrFile to fd 2...
	I0702 21:25:30.832279    7606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:30.832442    7606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:25:30.832611    7606 out.go:298] Setting JSON to false
	I0702 21:25:30.832627    7606 mustload.go:65] Loading cluster: ha-862000
	I0702 21:25:30.832660    7606 notify.go:220] Checking for updates...
	I0702 21:25:30.832893    7606 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:25:30.832900    7606 status.go:255] checking status of ha-862000 ...
	I0702 21:25:30.833202    7606 status.go:330] ha-862000 host status = "Stopped" (err=<nil>)
	I0702 21:25:30.833207    7606 status.go:343] host is not running, skipping remaining checks
	I0702 21:25:30.833210    7606 status.go:257] ha-862000 status: &{Name:ha-862000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr": ha-862000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr": ha-862000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-862000 status -v=7 --alsologtostderr": ha-862000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (32.279125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.41s)
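
The three assertions above (ha_test.go:543, 549, 552) expect the status output to contain one block per node: two control planes, three kubelets, two apiservers. With only the single block shown, every count comes up short. A sketch of the counting idea, under the assumption that substring counts over the status text are representative of the test's checks:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// The single-node status block captured above; a healthy run of this
		// test would return one such block per cluster node.
		out := "ha-862000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		fmt.Println("control planes:", strings.Count(out, "type: Control Plane")) // 1; the test expects 2
		fmt.Println("kubelets:", strings.Count(out, "kubelet: Stopped"))          // 1; the test expects 3
		fmt.Println("apiservers:", strings.Count(out, "apiserver: Stopped"))      // 1; the test expects 2
	}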

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-862000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-862000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.185686834s)

-- stdout --
	* [ha-862000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-862000" primary control-plane node in "ha-862000" cluster
	* Restarting existing qemu2 VM for "ha-862000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-862000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:25:30.894709    7610 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:25:30.894917    7610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:30.894980    7610 out.go:304] Setting ErrFile to fd 2...
	I0702 21:25:30.894986    7610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:30.895227    7610 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:25:30.896391    7610 out.go:298] Setting JSON to false
	I0702 21:25:30.912765    7610 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5099,"bootTime":1719975631,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:25:30.912830    7610 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:25:30.918245    7610 out.go:177] * [ha-862000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:25:30.922668    7610 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:25:30.922705    7610 notify.go:220] Checking for updates...
	I0702 21:25:30.932116    7610 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:25:30.933212    7610 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:25:30.936116    7610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:25:30.939143    7610 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:25:30.942136    7610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:25:30.945478    7610 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:25:30.945752    7610 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:25:30.950116    7610 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:25:30.957096    7610 start.go:297] selected driver: qemu2
	I0702 21:25:30.957103    7610 start.go:901] validating driver "qemu2" against &{Name:ha-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.2 ClusterName:ha-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:25:30.957168    7610 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:25:30.959500    7610 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:25:30.959551    7610 cni.go:84] Creating CNI manager for ""
	I0702 21:25:30.959556    7610 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0702 21:25:30.959610    7610 start.go:340] cluster config:
	{Name:ha-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-862000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:25:30.963172    7610 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:25:30.972087    7610 out.go:177] * Starting "ha-862000" primary control-plane node in "ha-862000" cluster
	I0702 21:25:30.976123    7610 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:25:30.976140    7610 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:25:30.976150    7610 cache.go:56] Caching tarball of preloaded images
	I0702 21:25:30.976212    7610 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:25:30.976218    7610 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:25:30.976277    7610 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/ha-862000/config.json ...
	I0702 21:25:30.976677    7610 start.go:360] acquireMachinesLock for ha-862000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:25:30.976705    7610 start.go:364] duration metric: took 22.416µs to acquireMachinesLock for "ha-862000"
	I0702 21:25:30.976715    7610 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:25:30.976721    7610 fix.go:54] fixHost starting: 
	I0702 21:25:30.976844    7610 fix.go:112] recreateIfNeeded on ha-862000: state=Stopped err=<nil>
	W0702 21:25:30.976852    7610 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:25:30.985090    7610 out.go:177] * Restarting existing qemu2 VM for "ha-862000" ...
	I0702 21:25:30.989202    7610 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:76:bb:61:40:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2
	I0702 21:25:30.991403    7610 main.go:141] libmachine: STDOUT: 
	I0702 21:25:30.991426    7610 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:25:30.991453    7610 fix.go:56] duration metric: took 14.731125ms for fixHost
	I0702 21:25:30.991458    7610 start.go:83] releasing machines lock for "ha-862000", held for 14.74825ms
	W0702 21:25:30.991463    7610 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:25:30.991490    7610 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:25:30.991495    7610 start.go:728] Will try again in 5 seconds ...
	I0702 21:25:35.993551    7610 start.go:360] acquireMachinesLock for ha-862000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:25:35.993915    7610 start.go:364] duration metric: took 248.459µs to acquireMachinesLock for "ha-862000"
	I0702 21:25:35.994042    7610 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:25:35.994062    7610 fix.go:54] fixHost starting: 
	I0702 21:25:35.994754    7610 fix.go:112] recreateIfNeeded on ha-862000: state=Stopped err=<nil>
	W0702 21:25:35.994781    7610 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:25:35.998399    7610 out.go:177] * Restarting existing qemu2 VM for "ha-862000" ...
	I0702 21:25:36.007372    7610 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:76:bb:61:40:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/ha-862000/disk.qcow2
	I0702 21:25:36.016467    7610 main.go:141] libmachine: STDOUT: 
	I0702 21:25:36.016527    7610 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:25:36.016592    7610 fix.go:56] duration metric: took 22.530459ms for fixHost
	I0702 21:25:36.016615    7610 start.go:83] releasing machines lock for "ha-862000", held for 22.675166ms
	W0702 21:25:36.016806    7610 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-862000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-862000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:25:36.024069    7610 out.go:177] 
	W0702 21:25:36.028221    7610 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:25:36.028257    7610 out.go:239] * 
	* 
	W0702 21:25:36.030775    7610 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:25:36.038271    7610 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-862000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (67.787083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
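
The stderr above also shows the shape of minikube's start path on a broken driver: fixHost fails (start.go:713), start.go:728 waits five seconds, the single retry fails identically, and the run exits with GUEST_PROVISION. A compressed sketch of that retry-once pattern (startHost here is a stub standing in for the failing driver start, not minikube's actual function):

	package main

	import (
		"errors"
		"log"
		"time"
	)

	// Stub for the driver start that fails throughout this report.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			log.Printf("! StartHost failed, but will try again: %v", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..." in the log
			if err := startHost(); err != nil {
				log.Fatalf("X Exiting due to GUEST_PROVISION: %v", err)
			}
		}
	}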

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-862000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-862000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-862000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-862000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (29.713709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-862000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-862000 --control-plane -v=7 --alsologtostderr: exit status 83 (43.047334ms)

-- stdout --
	* The control-plane node ha-862000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-862000"

-- /stdout --
** stderr ** 
	I0702 21:25:36.229260    7625 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:25:36.229417    7625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:36.229421    7625 out.go:304] Setting ErrFile to fd 2...
	I0702 21:25:36.229424    7625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:25:36.229556    7625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:25:36.229822    7625 mustload.go:65] Loading cluster: ha-862000
	I0702 21:25:36.230019    7625 config.go:182] Loaded profile config "ha-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:25:36.234787    7625 out.go:177] * The control-plane node ha-862000 host is not running: state=Stopped
	I0702 21:25:36.238805    7625 out.go:177]   To start a cluster, run: "minikube start -p ha-862000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-862000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (29.980833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-862000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-862000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-862000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-862000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-862000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-862000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-862000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-862000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-862000 -n ha-862000: exit status 7 (29.017958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-862000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

TestImageBuild/serial/Setup (9.9s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-589000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-589000 --driver=qemu2 : exit status 80 (9.830045875s)

-- stdout --
	* [image-589000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-589000" primary control-plane node in "image-589000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-589000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-589000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-589000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-589000 -n image-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-589000 -n image-589000: exit status 7 (67.685917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-589000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.90s)

TestJSONOutput/start/Command (9.67s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-744000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-744000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.672207625s)

-- stdout --
	{"specversion":"1.0","id":"64fe8a65-0dc9-41d0-8f7a-e4a9fda383b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-744000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4ac86b9-3fc0-47bb-a63a-d8e3a8bd5fda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19184"}}
	{"specversion":"1.0","id":"ae6bdee7-8e72-4441-9ced-5782819b98cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig"}}
	{"specversion":"1.0","id":"e0aefce1-8807-49ab-8d78-15a78f10872d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"52d6dc1c-d9e3-41bd-8d15-9f67e77e3ef9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"92cd5bd5-8244-4f7f-9b39-a3f3eb966172","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube"}}
	{"specversion":"1.0","id":"70433d0f-9b68-448b-b102-6e3e123c2b4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9bf7695a-e1ad-4463-8c8b-05964293f229","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"93e0cbf2-31b4-41e2-9da6-02208714610b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"09fc98d2-a8e7-435f-9150-d0b21772bb92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-744000\" primary control-plane node in \"json-output-744000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"faf91f3c-1e13-461d-bb9c-ede506449561","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"70a0a204-64fc-496d-bee7-d3bf773f4d44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-744000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"c2ce4f78-a91e-4406-b4cc-b047ea0d51bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"8a79df5a-ff26-426f-a5b1-5c3ef367b43a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c0f8a2ec-ebd2-482a-a932-556a5d68a3f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-744000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"5f0ac7a3-1af4-41f6-ad4d-0f794dcaf6b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"2e9bfd90-5c0e-49f5-908c-0fd039d8659b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-744000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.67s)
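
Note: the parse failure above is mechanical rather than a bug in the events themselves. With --output=json, minikube emits one CloudEvent per stdout line, but socket_vmnet_client interleaves plain-text "OUTPUT:" and "ERROR:" lines into the same stream, so the test's line-by-line JSON decoding trips on the first non-JSON byte, exactly as logged. A minimal sketch of the failure mode in Go (not the actual test code):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var event map[string]interface{}
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.info"}`, // decodes cleanly
			"OUTPUT: ", // plain-text noise interleaved by socket_vmnet_client
		}
		for _, line := range lines {
			if err := json.Unmarshal([]byte(line), &event); err != nil {
				// Prints: invalid character 'O' looking for beginning of value
				fmt.Println(err)
			}
		}
	}

The same mechanism explains the "invalid character '*'" failure in TestJSONOutput/unpause below: the exit-status-83 advice is printed as plain "*"-prefixed text rather than JSON.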

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-744000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-744000 --output=json --user=testUser: exit status 83 (77.161458ms)

-- stdout --
	{"specversion":"1.0","id":"53956e91-2ce8-455d-bdb5-773840cc36a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-744000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"75c6c6c6-99e1-4764-8d2e-8691c1939a1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-744000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-744000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-744000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-744000 --output=json --user=testUser: exit status 83 (44.112375ms)

-- stdout --
	* The control-plane node json-output-744000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-744000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-744000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-744000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (9.99s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-685000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-685000 --driver=qemu2 : exit status 80 (9.703455875s)

-- stdout --
	* [first-685000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-685000" primary control-plane node in "first-685000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-685000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-685000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-685000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-02 21:26:08.022724 -0700 PDT m=+456.726106918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-687000 -n second-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-687000 -n second-687000: exit status 85 (79.84625ms)

-- stdout --
	* Profile "second-687000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-687000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-687000" host is not running, skipping log retrieval (state="* Profile \"second-687000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-687000\"")
helpers_test.go:175: Cleaning up "second-687000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-687000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-02 21:26:08.207599 -0700 PDT m=+456.910985001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-685000 -n first-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-685000 -n first-685000: exit status 7 (29.335083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-685000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-685000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-685000
--- FAIL: TestMinikubeProfile (9.99s)
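
Note: every qemu2 start failure in this report reduces to the same root cause: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client exits before qemu is ever launched and minikube fails with GUEST_PROVISION. A quick probe of the daemon socket (a sketch; the socket path is taken from the logs above):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// On this agent this would print "... connect: connection refused",
			// matching the errors above.
			fmt.Println("socket_vmnet daemon unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet daemon is listening")
	}

If the probe fails, restarting the socket_vmnet daemon on the build agent (it runs as root, since the vmnet framework requires it) should clear this whole class of failures; no per-test fix will help.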

TestMountStart/serial/StartWithMountFirst (10.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-796000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-796000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.04838625s)

-- stdout --
	* [mount-start-1-796000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-796000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-796000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-796000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-796000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-796000 -n mount-start-1-796000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-796000 -n mount-start-1-796000: exit status 7 (71.805792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-796000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.12s)

TestMultiNode/serial/FreshStart2Nodes (9.9s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-547000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-547000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.831779875s)

-- stdout --
	* [multinode-547000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-547000" primary control-plane node in "multinode-547000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-547000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:26:18.632446    7773 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:26:18.632575    7773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:26:18.632580    7773 out.go:304] Setting ErrFile to fd 2...
	I0702 21:26:18.632582    7773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:26:18.632701    7773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:26:18.633737    7773 out.go:298] Setting JSON to false
	I0702 21:26:18.650038    7773 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5147,"bootTime":1719975631,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:26:18.650104    7773 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:26:18.655696    7773 out.go:177] * [multinode-547000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:26:18.662673    7773 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:26:18.662761    7773 notify.go:220] Checking for updates...
	I0702 21:26:18.669605    7773 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:26:18.672610    7773 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:26:18.675662    7773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:26:18.678564    7773 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:26:18.681636    7773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:26:18.684827    7773 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:26:18.688597    7773 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:26:18.695624    7773 start.go:297] selected driver: qemu2
	I0702 21:26:18.695631    7773 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:26:18.695637    7773 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:26:18.697889    7773 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:26:18.700559    7773 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:26:18.703681    7773 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:26:18.703708    7773 cni.go:84] Creating CNI manager for ""
	I0702 21:26:18.703713    7773 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0702 21:26:18.703717    7773 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0702 21:26:18.703747    7773 start.go:340] cluster config:
	{Name:multinode-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:26:18.707417    7773 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:26:18.715611    7773 out.go:177] * Starting "multinode-547000" primary control-plane node in "multinode-547000" cluster
	I0702 21:26:18.719643    7773 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:26:18.719662    7773 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:26:18.719670    7773 cache.go:56] Caching tarball of preloaded images
	I0702 21:26:18.719735    7773 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:26:18.719741    7773 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:26:18.719945    7773 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/multinode-547000/config.json ...
	I0702 21:26:18.719961    7773 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/multinode-547000/config.json: {Name:mke19e437e053e68f6aaef80a577cdefc8734c25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:26:18.720282    7773 start.go:360] acquireMachinesLock for multinode-547000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:26:18.720315    7773 start.go:364] duration metric: took 28.084µs to acquireMachinesLock for "multinode-547000"
	I0702 21:26:18.720328    7773 start.go:93] Provisioning new machine with config: &{Name:multinode-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:26:18.720363    7773 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:26:18.728617    7773 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:26:18.746057    7773 start.go:159] libmachine.API.Create for "multinode-547000" (driver="qemu2")
	I0702 21:26:18.746090    7773 client.go:168] LocalClient.Create starting
	I0702 21:26:18.746171    7773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:26:18.746200    7773 main.go:141] libmachine: Decoding PEM data...
	I0702 21:26:18.746210    7773 main.go:141] libmachine: Parsing certificate...
	I0702 21:26:18.746241    7773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:26:18.746265    7773 main.go:141] libmachine: Decoding PEM data...
	I0702 21:26:18.746277    7773 main.go:141] libmachine: Parsing certificate...
	I0702 21:26:18.746732    7773 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:26:18.873260    7773 main.go:141] libmachine: Creating SSH key...
	I0702 21:26:18.957696    7773 main.go:141] libmachine: Creating Disk image...
	I0702 21:26:18.957705    7773 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:26:18.957882    7773 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2
	I0702 21:26:18.966955    7773 main.go:141] libmachine: STDOUT: 
	I0702 21:26:18.966972    7773 main.go:141] libmachine: STDERR: 
	I0702 21:26:18.967013    7773 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2 +20000M
	I0702 21:26:18.974883    7773 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:26:18.974896    7773 main.go:141] libmachine: STDERR: 
	I0702 21:26:18.974907    7773 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2
	I0702 21:26:18.974915    7773 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:26:18.974948    7773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:3f:68:ce:44:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2
	I0702 21:26:18.976496    7773 main.go:141] libmachine: STDOUT: 
	I0702 21:26:18.976509    7773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:26:18.976528    7773 client.go:171] duration metric: took 230.43675ms to LocalClient.Create
	I0702 21:26:20.978668    7773 start.go:128] duration metric: took 2.258329333s to createHost
	I0702 21:26:20.978726    7773 start.go:83] releasing machines lock for "multinode-547000", held for 2.258437333s
	W0702 21:26:20.978863    7773 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:26:20.987891    7773 out.go:177] * Deleting "multinode-547000" in qemu2 ...
	W0702 21:26:21.011316    7773 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:26:21.011342    7773 start.go:728] Will try again in 5 seconds ...
	I0702 21:26:26.013505    7773 start.go:360] acquireMachinesLock for multinode-547000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:26:26.013935    7773 start.go:364] duration metric: took 341.417µs to acquireMachinesLock for "multinode-547000"
	I0702 21:26:26.014049    7773 start.go:93] Provisioning new machine with config: &{Name:multinode-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:26:26.014295    7773 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:26:26.026136    7773 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:26:26.078422    7773 start.go:159] libmachine.API.Create for "multinode-547000" (driver="qemu2")
	I0702 21:26:26.078476    7773 client.go:168] LocalClient.Create starting
	I0702 21:26:26.078594    7773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:26:26.078660    7773 main.go:141] libmachine: Decoding PEM data...
	I0702 21:26:26.078674    7773 main.go:141] libmachine: Parsing certificate...
	I0702 21:26:26.078729    7773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:26:26.078772    7773 main.go:141] libmachine: Decoding PEM data...
	I0702 21:26:26.078788    7773 main.go:141] libmachine: Parsing certificate...
	I0702 21:26:26.079312    7773 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:26:26.219438    7773 main.go:141] libmachine: Creating SSH key...
	I0702 21:26:26.369466    7773 main.go:141] libmachine: Creating Disk image...
	I0702 21:26:26.369471    7773 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:26:26.369648    7773 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2
	I0702 21:26:26.379099    7773 main.go:141] libmachine: STDOUT: 
	I0702 21:26:26.379122    7773 main.go:141] libmachine: STDERR: 
	I0702 21:26:26.379178    7773 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2 +20000M
	I0702 21:26:26.386981    7773 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:26:26.386993    7773 main.go:141] libmachine: STDERR: 
	I0702 21:26:26.387010    7773 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2
	I0702 21:26:26.387015    7773 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:26:26.387058    7773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:84:2a:d3:f7:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2
	I0702 21:26:26.388607    7773 main.go:141] libmachine: STDOUT: 
	I0702 21:26:26.388622    7773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:26:26.388634    7773 client.go:171] duration metric: took 310.157375ms to LocalClient.Create
	I0702 21:26:28.390846    7773 start.go:128] duration metric: took 2.376554375s to createHost
	I0702 21:26:28.390928    7773 start.go:83] releasing machines lock for "multinode-547000", held for 2.37701775s
	W0702 21:26:28.391427    7773 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:26:28.404108    7773 out.go:177] 
	W0702 21:26:28.407083    7773 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:26:28.407110    7773 out.go:239] * 
	* 
	W0702 21:26:28.409669    7773 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:26:28.421096    7773 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-547000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (66.554291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.90s)
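
Note: the verbose trace above shows exactly where provisioning breaks: the qemu-img convert/resize steps succeed, and the failure happens when libmachine launches qemu through socket_vmnet_client. The client is expected to connect to the daemon's unix socket and hand the connected descriptor to qemu as fd 3, which is why the command line above ends with "-netdev socket,id=net0,fd=3". A simplified Go model of that handoff (an illustration of the documented mechanism, not minikube's code; socket_vmnet_client itself is a C program):

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Step 1: connect to the daemon. This is the step failing in the log.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatal(err) // connection refused: qemu is never launched
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// Step 2: pass the connected socket to qemu. ExtraFiles[0] becomes
		// fd 3 in the child (fds 0-2 are stdio), matching the
		// "-netdev socket,id=net0,fd=3" argument seen above.
		cmd := exec.Command("qemu-system-aarch64" /* remaining args elided */)
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
	}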

TestMultiNode/serial/DeployApp2Nodes (96.46s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.21325ms)

** stderr ** 
	error: cluster "multinode-547000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- rollout status deployment/busybox: exit status 1 (56.529125ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.510959ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.067375ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.435292ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.170291ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.7115ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.819834ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.164041ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.18925ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.430709ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.430542ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.266541ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.686125ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.601125ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.308708ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.630875ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (30.448125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (96.46s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-547000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.201958ms)

** stderr ** 
	error: no server found for cluster "multinode-547000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (29.282875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-547000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-547000 -v 3 --alsologtostderr: exit status 83 (41.344459ms)

-- stdout --
	* The control-plane node multinode-547000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-547000"

-- /stdout --
** stderr ** 
	I0702 21:28:05.079560    7879 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:28:05.079721    7879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:05.079725    7879 out.go:304] Setting ErrFile to fd 2...
	I0702 21:28:05.079728    7879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:05.079868    7879 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:28:05.080114    7879 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:28:05.080308    7879 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:28:05.084689    7879 out.go:177] * The control-plane node multinode-547000 host is not running: state=Stopped
	I0702 21:28:05.087709    7879 out.go:177]   To start a cluster, run: "minikube start -p multinode-547000"
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-547000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (30.080292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-547000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-547000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.130583ms)
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-547000
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-547000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-547000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (30.351375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
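
Note: the "unexpected end of JSON input" at multinode_test.go:230 follows mechanically from the step before it: kubectl wrote nothing to stdout (the context does not exist), and encoding/json fails with exactly that error when handed empty input. A standalone reproduction, independent of minikube:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var labels []map[string]string
        // kubectl produced no stdout, so the test effectively decodes an empty byte slice:
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }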
TestMultiNode/serial/ProfileList (0.08s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-547000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-547000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-547000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"multinode-547000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (30.981125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
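
Note: the check at multinode_test.go:166 amounts to decoding the `profile list --output json` payload quoted above and counting Config.Nodes for the profile; since the second and third nodes were never created, the slice holds only the control-plane entry. A cut-down sketch of that check (the struct names here are illustrative, not minikube's own types):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []struct {
                    Name         string
                    ControlPlane bool
                    Worker       bool
                }
            }
        } `json:"valid"`
    }

    func main() {
        // A trimmed version of the payload in the failure message above.
        data := []byte(`{"valid":[{"Name":"multinode-547000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(data, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // the test wants 3 here
        }
    }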
TestMultiNode/serial/CopyFile (0.06s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status --output json --alsologtostderr: exit status 7 (30.523542ms)
-- stdout --
	{"Name":"multinode-547000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
** stderr ** 
	I0702 21:28:05.285766    7891 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:28:05.285908    7891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:05.285912    7891 out.go:304] Setting ErrFile to fd 2...
	I0702 21:28:05.285915    7891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:05.286060    7891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:28:05.286191    7891 out.go:298] Setting JSON to true
	I0702 21:28:05.286204    7891 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:28:05.286265    7891 notify.go:220] Checking for updates...
	I0702 21:28:05.286411    7891 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:28:05.286417    7891 status.go:255] checking status of multinode-547000 ...
	I0702 21:28:05.286622    7891 status.go:330] multinode-547000 host status = "Stopped" (err=<nil>)
	I0702 21:28:05.286626    7891 status.go:343] host is not running, skipping remaining checks
	I0702 21:28:05.286628    7891 status.go:257] multinode-547000 status: &{Name:multinode-547000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-547000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
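
Note: this decode failure is a shape mismatch rather than corrupt output: with a single node in the profile, `status --output json` emits one JSON object (visible in the stdout above), while the test unmarshals into a slice. A standalone reproduction (the Status struct below only loosely mirrors the cmd.Status named in the error; the real type lives in minikube's cmd package):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Field names copied from the stdout above.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        single := []byte(`{"Name":"multinode-547000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        var many []Status
        // Decoding one object into a slice fails the same way the test does:
        fmt.Println(json.Unmarshal(single, &many))
    }
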
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (29.974709ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
TestMultiNode/serial/StopNode (0.14s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 node stop m03: exit status 85 (47.388958ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-547000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status: exit status 7 (30.548ms)
-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status --alsologtostderr: exit status 7 (30.2065ms)
-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0702 21:28:05.424767    7899 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:28:05.424899    7899 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:05.424904    7899 out.go:304] Setting ErrFile to fd 2...
	I0702 21:28:05.424906    7899 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:05.425062    7899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:28:05.425183    7899 out.go:298] Setting JSON to false
	I0702 21:28:05.425195    7899 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:28:05.425264    7899 notify.go:220] Checking for updates...
	I0702 21:28:05.425405    7899 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:28:05.425411    7899 status.go:255] checking status of multinode-547000 ...
	I0702 21:28:05.425622    7899 status.go:330] multinode-547000 host status = "Stopped" (err=<nil>)
	I0702 21:28:05.425626    7899 status.go:343] host is not running, skipping remaining checks
	I0702 21:28:05.425628    7899 status.go:257] multinode-547000 status: &{Name:multinode-547000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-547000 status --alsologtostderr": multinode-547000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (30.372ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
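
Note: "Could not find node m03" is consistent with the earlier failures: minikube names secondary nodes <profile>-m02, <profile>-m03, and so on (addressed here by the short suffix m03), and neither worker was ever created because the initial cluster start failed. The real `node list` subcommand shows what the profile actually contains; a small check in the same exec style the harness uses, with the binary path taken from this run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `minikube node list` prints one "<name> <ip>" line per node in the profile.
        out, err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-547000",
            "node", "list").CombinedOutput()
        fmt.Printf("err=%v\n%s", err, out)
    }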
TestMultiNode/serial/StartAfterStop (54.08s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.3595ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0702 21:28:05.485182    7903 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:28:05.485748    7903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:05.485753    7903 out.go:304] Setting ErrFile to fd 2...
	I0702 21:28:05.485755    7903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:05.485895    7903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:28:05.486107    7903 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:28:05.486298    7903 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:28:05.490659    7903 out.go:177] 
	W0702 21:28:05.491868    7903 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0702 21:28:05.491874    7903 out.go:239] * 
	* 
	W0702 21:28:05.493842    7903 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:28:05.498635    7903 out.go:177] 
** /stderr **
multinode_test.go:284: I0702 21:28:05.485182    7903 out.go:291] Setting OutFile to fd 1 ...
I0702 21:28:05.485748    7903 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:28:05.485753    7903 out.go:304] Setting ErrFile to fd 2...
I0702 21:28:05.485755    7903 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0702 21:28:05.485895    7903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
I0702 21:28:05.486107    7903 mustload.go:65] Loading cluster: multinode-547000
I0702 21:28:05.486298    7903 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0702 21:28:05.490659    7903 out.go:177] 
W0702 21:28:05.491868    7903 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0702 21:28:05.491874    7903 out.go:239] * 
* 
W0702 21:28:05.493842    7903 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0702 21:28:05.498635    7903 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-547000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr: exit status 7 (29.725666ms)
-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0702 21:28:05.531742    7905 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:28:05.531900    7905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:05.531904    7905 out.go:304] Setting ErrFile to fd 2...
	I0702 21:28:05.531906    7905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:05.532036    7905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:28:05.532162    7905 out.go:298] Setting JSON to false
	I0702 21:28:05.532174    7905 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:28:05.532233    7905 notify.go:220] Checking for updates...
	I0702 21:28:05.532369    7905 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:28:05.532375    7905 status.go:255] checking status of multinode-547000 ...
	I0702 21:28:05.532582    7905 status.go:330] multinode-547000 host status = "Stopped" (err=<nil>)
	I0702 21:28:05.532585    7905 status.go:343] host is not running, skipping remaining checks
	I0702 21:28:05.532588    7905 status.go:257] multinode-547000 status: &{Name:multinode-547000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr: exit status 7 (72.605083ms)
-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0702 21:28:06.119033    7907 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:28:06.119231    7907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:06.119236    7907 out.go:304] Setting ErrFile to fd 2...
	I0702 21:28:06.119239    7907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:06.119401    7907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:28:06.119550    7907 out.go:298] Setting JSON to false
	I0702 21:28:06.119569    7907 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:28:06.119599    7907 notify.go:220] Checking for updates...
	I0702 21:28:06.119818    7907 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:28:06.119824    7907 status.go:255] checking status of multinode-547000 ...
	I0702 21:28:06.120093    7907 status.go:330] multinode-547000 host status = "Stopped" (err=<nil>)
	I0702 21:28:06.120098    7907 status.go:343] host is not running, skipping remaining checks
	I0702 21:28:06.120101    7907 status.go:257] multinode-547000 status: &{Name:multinode-547000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr: exit status 7 (74.124458ms)
-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0702 21:28:08.138697    7911 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:28:08.138916    7911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:08.138922    7911 out.go:304] Setting ErrFile to fd 2...
	I0702 21:28:08.138925    7911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:08.139105    7911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:28:08.139264    7911 out.go:298] Setting JSON to false
	I0702 21:28:08.139280    7911 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:28:08.139307    7911 notify.go:220] Checking for updates...
	I0702 21:28:08.139523    7911 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:28:08.139530    7911 status.go:255] checking status of multinode-547000 ...
	I0702 21:28:08.139793    7911 status.go:330] multinode-547000 host status = "Stopped" (err=<nil>)
	I0702 21:28:08.139798    7911 status.go:343] host is not running, skipping remaining checks
	I0702 21:28:08.139801    7911 status.go:257] multinode-547000 status: &{Name:multinode-547000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr: exit status 7 (74.389792ms)
-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0702 21:28:09.731269    7913 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:28:09.731440    7913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:09.731445    7913 out.go:304] Setting ErrFile to fd 2...
	I0702 21:28:09.731448    7913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:09.731642    7913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:28:09.731808    7913 out.go:298] Setting JSON to false
	I0702 21:28:09.731823    7913 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:28:09.731874    7913 notify.go:220] Checking for updates...
	I0702 21:28:09.732071    7913 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:28:09.732077    7913 status.go:255] checking status of multinode-547000 ...
	I0702 21:28:09.732362    7913 status.go:330] multinode-547000 host status = "Stopped" (err=<nil>)
	I0702 21:28:09.732367    7913 status.go:343] host is not running, skipping remaining checks
	I0702 21:28:09.732370    7913 status.go:257] multinode-547000 status: &{Name:multinode-547000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr: exit status 7 (73.317333ms)
-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0702 21:28:12.820692    7915 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:28:12.821168    7915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:12.821175    7915 out.go:304] Setting ErrFile to fd 2...
	I0702 21:28:12.821179    7915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:12.821424    7915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:28:12.821623    7915 out.go:298] Setting JSON to false
	I0702 21:28:12.821638    7915 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:28:12.821705    7915 notify.go:220] Checking for updates...
	I0702 21:28:12.822211    7915 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:28:12.822221    7915 status.go:255] checking status of multinode-547000 ...
	I0702 21:28:12.822483    7915 status.go:330] multinode-547000 host status = "Stopped" (err=<nil>)
	I0702 21:28:12.822489    7915 status.go:343] host is not running, skipping remaining checks
	I0702 21:28:12.822492    7915 status.go:257] multinode-547000 status: &{Name:multinode-547000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr: exit status 7 (75.535833ms)
-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0702 21:28:18.603213    7919 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:28:18.603419    7919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:18.603425    7919 out.go:304] Setting ErrFile to fd 2...
	I0702 21:28:18.603428    7919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:18.603595    7919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:28:18.603750    7919 out.go:298] Setting JSON to false
	I0702 21:28:18.603767    7919 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:28:18.603800    7919 notify.go:220] Checking for updates...
	I0702 21:28:18.604002    7919 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:28:18.604009    7919 status.go:255] checking status of multinode-547000 ...
	I0702 21:28:18.604275    7919 status.go:330] multinode-547000 host status = "Stopped" (err=<nil>)
	I0702 21:28:18.604280    7919 status.go:343] host is not running, skipping remaining checks
	I0702 21:28:18.604283    7919 status.go:257] multinode-547000 status: &{Name:multinode-547000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr: exit status 7 (73.30925ms)
-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0702 21:28:28.156492    7926 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:28:28.156673    7926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:28.156678    7926 out.go:304] Setting ErrFile to fd 2...
	I0702 21:28:28.156682    7926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:28.156847    7926 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:28:28.157000    7926 out.go:298] Setting JSON to false
	I0702 21:28:28.157013    7926 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:28:28.157056    7926 notify.go:220] Checking for updates...
	I0702 21:28:28.157251    7926 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:28:28.157259    7926 status.go:255] checking status of multinode-547000 ...
	I0702 21:28:28.157530    7926 status.go:330] multinode-547000 host status = "Stopped" (err=<nil>)
	I0702 21:28:28.157535    7926 status.go:343] host is not running, skipping remaining checks
	I0702 21:28:28.157538    7926 status.go:257] multinode-547000 status: &{Name:multinode-547000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr: exit status 7 (74.217167ms)
-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0702 21:28:39.681058    7928 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:28:39.681294    7928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:39.681300    7928 out.go:304] Setting ErrFile to fd 2...
	I0702 21:28:39.681303    7928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:39.681494    7928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:28:39.681657    7928 out.go:298] Setting JSON to false
	I0702 21:28:39.681672    7928 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:28:39.681707    7928 notify.go:220] Checking for updates...
	I0702 21:28:39.681950    7928 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:28:39.681958    7928 status.go:255] checking status of multinode-547000 ...
	I0702 21:28:39.682231    7928 status.go:330] multinode-547000 host status = "Stopped" (err=<nil>)
	I0702 21:28:39.682236    7928 status.go:343] host is not running, skipping remaining checks
	I0702 21:28:39.682239    7928 status.go:257] multinode-547000 status: &{Name:multinode-547000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr: exit status 7 (74.056917ms)
-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0702 21:28:59.501367    7936 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:28:59.501573    7936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:59.501579    7936 out.go:304] Setting ErrFile to fd 2...
	I0702 21:28:59.501582    7936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:28:59.501739    7936 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:28:59.501912    7936 out.go:298] Setting JSON to false
	I0702 21:28:59.501930    7936 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:28:59.501977    7936 notify.go:220] Checking for updates...
	I0702 21:28:59.502231    7936 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:28:59.502239    7936 status.go:255] checking status of multinode-547000 ...
	I0702 21:28:59.502534    7936 status.go:330] multinode-547000 host status = "Stopped" (err=<nil>)
	I0702 21:28:59.502539    7936 status.go:343] host is not running, skipping remaining checks
	I0702 21:28:59.502542    7936 status.go:257] multinode-547000 status: &{Name:multinode-547000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-547000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (33.312ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (54.08s)
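
Note: the nine status probes above run at widening intervals (21:28:05, :06, :08, :09, :12, :18, :28, :39, :59), i.e. the test waits with growing backoff for the host to leave Stopped before giving up at ~54s. A minimal sketch of that shape, assuming a simple doubling delay rather than the suite's exact schedule:

    package main

    import (
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond
        for i := 0; i < 9; i++ {
            out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-547000",
                "status", "--format", "{{.Host}}").Output()
            if strings.TrimSpace(string(out)) == "Running" {
                return // host came back
            }
            time.Sleep(delay)
            delay *= 2 // widen the gap, as the timestamps above do
        }
        // after the last attempt the suite records "failed to run minikube status"
    }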
TestMultiNode/serial/RestartKeepsNodes (8.83s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-547000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-547000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-547000: (3.473935583s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-547000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-547000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.221191792s)
-- stdout --
	* [multinode-547000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-547000" primary control-plane node in "multinode-547000" cluster
	* Restarting existing qemu2 VM for "multinode-547000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-547000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0702 21:29:03.100465    7965 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:29:03.100658    7965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:29:03.100663    7965 out.go:304] Setting ErrFile to fd 2...
	I0702 21:29:03.100667    7965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:29:03.100815    7965 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:29:03.102103    7965 out.go:298] Setting JSON to false
	I0702 21:29:03.121422    7965 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5312,"bootTime":1719975631,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:29:03.121495    7965 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:29:03.125153    7965 out.go:177] * [multinode-547000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:29:03.133104    7965 notify.go:220] Checking for updates...
	I0702 21:29:03.138078    7965 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:29:03.146045    7965 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:29:03.149073    7965 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:29:03.153067    7965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:29:03.156018    7965 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:29:03.159063    7965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:29:03.162351    7965 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:29:03.162404    7965 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:29:03.167049    7965 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:29:03.173978    7965 start.go:297] selected driver: qemu2
	I0702 21:29:03.173985    7965 start.go:901] validating driver "qemu2" against &{Name:multinode-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:29:03.174033    7965 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:29:03.176642    7965 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:29:03.176683    7965 cni.go:84] Creating CNI manager for ""
	I0702 21:29:03.176688    7965 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0702 21:29:03.176744    7965 start.go:340] cluster config:
	{Name:multinode-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:29:03.180572    7965 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:29:03.187991    7965 out.go:177] * Starting "multinode-547000" primary control-plane node in "multinode-547000" cluster
	I0702 21:29:03.192046    7965 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:29:03.192063    7965 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:29:03.192074    7965 cache.go:56] Caching tarball of preloaded images
	I0702 21:29:03.192166    7965 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:29:03.192173    7965 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:29:03.192266    7965 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/multinode-547000/config.json ...
	I0702 21:29:03.192695    7965 start.go:360] acquireMachinesLock for multinode-547000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:29:03.192736    7965 start.go:364] duration metric: took 33.333µs to acquireMachinesLock for "multinode-547000"
	I0702 21:29:03.192748    7965 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:29:03.192754    7965 fix.go:54] fixHost starting: 
	I0702 21:29:03.192901    7965 fix.go:112] recreateIfNeeded on multinode-547000: state=Stopped err=<nil>
	W0702 21:29:03.192910    7965 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:29:03.196010    7965 out.go:177] * Restarting existing qemu2 VM for "multinode-547000" ...
	I0702 21:29:03.204143    7965 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:84:2a:d3:f7:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2
	I0702 21:29:03.206555    7965 main.go:141] libmachine: STDOUT: 
	I0702 21:29:03.206579    7965 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:29:03.206611    7965 fix.go:56] duration metric: took 13.857333ms for fixHost
	I0702 21:29:03.206616    7965 start.go:83] releasing machines lock for "multinode-547000", held for 13.875458ms
	W0702 21:29:03.206624    7965 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:29:03.206661    7965 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:29:03.206667    7965 start.go:728] Will try again in 5 seconds ...
	I0702 21:29:08.208711    7965 start.go:360] acquireMachinesLock for multinode-547000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:29:08.209245    7965 start.go:364] duration metric: took 443.584µs to acquireMachinesLock for "multinode-547000"
	I0702 21:29:08.209421    7965 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:29:08.209442    7965 fix.go:54] fixHost starting: 
	I0702 21:29:08.210158    7965 fix.go:112] recreateIfNeeded on multinode-547000: state=Stopped err=<nil>
	W0702 21:29:08.210184    7965 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:29:08.213718    7965 out.go:177] * Restarting existing qemu2 VM for "multinode-547000" ...
	I0702 21:29:08.217786    7965 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:84:2a:d3:f7:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2
	I0702 21:29:08.226471    7965 main.go:141] libmachine: STDOUT: 
	I0702 21:29:08.226529    7965 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:29:08.226589    7965 fix.go:56] duration metric: took 17.148916ms for fixHost
	I0702 21:29:08.226606    7965 start.go:83] releasing machines lock for "multinode-547000", held for 17.287416ms
	W0702 21:29:08.226762    7965 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-547000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-547000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:29:08.234374    7965 out.go:177] 
	W0702 21:29:08.238582    7965 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:29:08.238612    7965 out.go:239] * 
	* 
	W0702 21:29:08.241107    7965 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:29:08.248567    7965 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-547000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-547000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (31.79425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.83s)
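Note: every restart attempt above dies at the same point: the qemu2 driver shells out to socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket, so the VM never boots and the host stays Stopped. A minimal diagnostic sketch before re-running the suite, assuming the Homebrew socket_vmnet install at the paths shown in the log (the service name is an assumption; adjust for a manual install):

    # Does the unix socket exist, and is the daemon alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If nothing is listening, start the daemon (Homebrew service name assumed):
    sudo brew services start socket_vmnet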

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 node delete m03: exit status 83 (39.664667ms)

-- stdout --
	* The control-plane node multinode-547000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-547000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-547000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status --alsologtostderr: exit status 7 (30.08275ms)

-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0702 21:29:08.431012    7979 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:29:08.431146    7979 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:29:08.431151    7979 out.go:304] Setting ErrFile to fd 2...
	I0702 21:29:08.431153    7979 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:29:08.431287    7979 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:29:08.431413    7979 out.go:298] Setting JSON to false
	I0702 21:29:08.431425    7979 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:29:08.431493    7979 notify.go:220] Checking for updates...
	I0702 21:29:08.431632    7979 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:29:08.431637    7979 status.go:255] checking status of multinode-547000 ...
	I0702 21:29:08.431851    7979 status.go:330] multinode-547000 host status = "Stopped" (err=<nil>)
	I0702 21:29:08.431855    7979 status.go:343] host is not running, skipping remaining checks
	I0702 21:29:08.431857    7979 status.go:257] multinode-547000 status: &{Name:multinode-547000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-547000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (30.94675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
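Note: the two non-zero exits above are distinct failure classes: node delete returns 83 (control plane not running, minikube only prints advice), while status returns 7 (host reported Stopped, which helpers_test.go treats as "may be ok"). A sketch that guards a node operation on the status code, using codes 7 and 83 as observed in this log (0 for a running host is an assumption, not taken from this run):

    out/minikube-darwin-arm64 -p multinode-547000 status --format={{.Host}}
    rc=$?
    case $rc in
      0) out/minikube-darwin-arm64 -p multinode-547000 node delete m03 ;;
      7) echo "host stopped; run: minikube start -p multinode-547000" ;;
      *) echo "unexpected status exit code: $rc" ;;
    esac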

TestMultiNode/serial/StopMultiNode (3.06s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-547000 stop: (2.9327075s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status: exit status 7 (64.274792ms)

-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-547000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-547000 status --alsologtostderr: exit status 7 (32.573708ms)

-- stdout --
	multinode-547000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0702 21:29:11.492074    8005 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:29:11.492234    8005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:29:11.492238    8005 out.go:304] Setting ErrFile to fd 2...
	I0702 21:29:11.492241    8005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:29:11.492378    8005 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:29:11.492498    8005 out.go:298] Setting JSON to false
	I0702 21:29:11.492510    8005 mustload.go:65] Loading cluster: multinode-547000
	I0702 21:29:11.492576    8005 notify.go:220] Checking for updates...
	I0702 21:29:11.492713    8005 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:29:11.492719    8005 status.go:255] checking status of multinode-547000 ...
	I0702 21:29:11.492921    8005 status.go:330] multinode-547000 host status = "Stopped" (err=<nil>)
	I0702 21:29:11.492926    8005 status.go:343] host is not running, skipping remaining checks
	I0702 21:29:11.492928    8005 status.go:257] multinode-547000 status: &{Name:multinode-547000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-547000 status --alsologtostderr": multinode-547000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-547000 status --alsologtostderr": multinode-547000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (30.024ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.06s)
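Note: the stop itself succeeds in ~2.9s; the assertions at multinode_test.go:364 and :368 fail because status lists a single node. The worker nodes were never created earlier in the serial sequence, so the expected number of stopped hosts and kubelets cannot match. A sketch of the count the test appears to perform, inferred from the failure text (the expected value of 2 is an assumption):

    # A two-node cluster after "minikube stop" should report 2 of each:
    out/minikube-darwin-arm64 -p multinode-547000 status --alsologtostderr | grep -c "host: Stopped"
    out/minikube-darwin-arm64 -p multinode-547000 status --alsologtostderr | grep -c "kubelet: Stopped"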

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-547000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-547000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.182405625s)

-- stdout --
	* [multinode-547000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-547000" primary control-plane node in "multinode-547000" cluster
	* Restarting existing qemu2 VM for "multinode-547000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-547000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:29:11.551876    8009 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:29:11.552021    8009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:29:11.552026    8009 out.go:304] Setting ErrFile to fd 2...
	I0702 21:29:11.552028    8009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:29:11.552167    8009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:29:11.553179    8009 out.go:298] Setting JSON to false
	I0702 21:29:11.569465    8009 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5320,"bootTime":1719975631,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:29:11.569531    8009 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:29:11.574097    8009 out.go:177] * [multinode-547000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:29:11.581026    8009 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:29:11.581093    8009 notify.go:220] Checking for updates...
	I0702 21:29:11.587984    8009 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:29:11.590927    8009 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:29:11.593984    8009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:29:11.596994    8009 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:29:11.599928    8009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:29:11.603272    8009 config.go:182] Loaded profile config "multinode-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:29:11.603539    8009 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:29:11.607960    8009 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:29:11.615000    8009 start.go:297] selected driver: qemu2
	I0702 21:29:11.615008    8009 start.go:901] validating driver "qemu2" against &{Name:multinode-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:29:11.615071    8009 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:29:11.617256    8009 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:29:11.617311    8009 cni.go:84] Creating CNI manager for ""
	I0702 21:29:11.617315    8009 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0702 21:29:11.617350    8009 start.go:340] cluster config:
	{Name:multinode-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:29:11.620755    8009 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:29:11.627966    8009 out.go:177] * Starting "multinode-547000" primary control-plane node in "multinode-547000" cluster
	I0702 21:29:11.632004    8009 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:29:11.632019    8009 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:29:11.632027    8009 cache.go:56] Caching tarball of preloaded images
	I0702 21:29:11.632088    8009 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:29:11.632093    8009 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:29:11.632161    8009 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/multinode-547000/config.json ...
	I0702 21:29:11.632574    8009 start.go:360] acquireMachinesLock for multinode-547000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:29:11.632603    8009 start.go:364] duration metric: took 22.875µs to acquireMachinesLock for "multinode-547000"
	I0702 21:29:11.632613    8009 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:29:11.632617    8009 fix.go:54] fixHost starting: 
	I0702 21:29:11.632724    8009 fix.go:112] recreateIfNeeded on multinode-547000: state=Stopped err=<nil>
	W0702 21:29:11.632732    8009 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:29:11.640986    8009 out.go:177] * Restarting existing qemu2 VM for "multinode-547000" ...
	I0702 21:29:11.643989    8009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:84:2a:d3:f7:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2
	I0702 21:29:11.645906    8009 main.go:141] libmachine: STDOUT: 
	I0702 21:29:11.645923    8009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:29:11.645951    8009 fix.go:56] duration metric: took 13.334541ms for fixHost
	I0702 21:29:11.645956    8009 start.go:83] releasing machines lock for "multinode-547000", held for 13.348916ms
	W0702 21:29:11.645962    8009 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:29:11.645997    8009 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:29:11.646001    8009 start.go:728] Will try again in 5 seconds ...
	I0702 21:29:16.648134    8009 start.go:360] acquireMachinesLock for multinode-547000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:29:16.648669    8009 start.go:364] duration metric: took 395.916µs to acquireMachinesLock for "multinode-547000"
	I0702 21:29:16.648802    8009 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:29:16.648823    8009 fix.go:54] fixHost starting: 
	I0702 21:29:16.649667    8009 fix.go:112] recreateIfNeeded on multinode-547000: state=Stopped err=<nil>
	W0702 21:29:16.649699    8009 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:29:16.657082    8009 out.go:177] * Restarting existing qemu2 VM for "multinode-547000" ...
	I0702 21:29:16.662280    8009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:84:2a:d3:f7:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/multinode-547000/disk.qcow2
	I0702 21:29:16.671340    8009 main.go:141] libmachine: STDOUT: 
	I0702 21:29:16.671404    8009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:29:16.671491    8009 fix.go:56] duration metric: took 22.6665ms for fixHost
	I0702 21:29:16.671507    8009 start.go:83] releasing machines lock for "multinode-547000", held for 22.814292ms
	W0702 21:29:16.671697    8009 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-547000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-547000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:29:16.679061    8009 out.go:177] 
	W0702 21:29:16.683183    8009 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:29:16.683213    8009 out.go:239] * 
	* 
	W0702 21:29:16.685947    8009 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:29:16.693061    8009 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-547000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (68.918ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
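Note: RestartMultiNode replays the same sequence as RestartKeepsNodes: fixHost finds the machine Stopped, invokes socket_vmnet_client, gets Connection refused, retries once after 5 seconds, and exits 80. The driver failure can be reproduced outside the test harness by invoking the client directly; a sketch with the qemu arguments trimmed (the full command line appears in the log above):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -display none
    # Expected while the daemon is down:
    #   Failed to connect to "/var/run/socket_vmnet": Connection refused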

TestMultiNode/serial/ValidateNameConflict (19.94s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-547000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-547000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-547000-m01 --driver=qemu2 : exit status 80 (9.872923s)

-- stdout --
	* [multinode-547000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-547000-m01" primary control-plane node in "multinode-547000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-547000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-547000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-547000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-547000-m02 --driver=qemu2 : exit status 80 (9.842662583s)

-- stdout --
	* [multinode-547000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-547000-m02" primary control-plane node in "multinode-547000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-547000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-547000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-547000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-547000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-547000: exit status 83 (83.915334ms)

-- stdout --
	* The control-plane node multinode-547000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-547000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-547000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-547000 -n multinode-547000: exit status 7 (29.888334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.94s)
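Note: ValidateNameConflict never reaches its real assertion. It starts profiles whose names (multinode-547000-m01, -m02) collide with the node-name scheme of the existing cluster, then checks how node add behaves; here both start invocations die on the socket_vmnet error first, and node add exits 83 only because the control-plane host is stopped. The intended collision, sketched from the commands the test runs above:

    # Profile names that look like node names of the existing cluster:
    out/minikube-darwin-arm64 start -p multinode-547000-m02 --driver=qemu2
    out/minikube-darwin-arm64 node add -p multinode-547000   # should surface the conflict
    out/minikube-darwin-arm64 delete -p multinode-547000-m02 # cleanup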

TestPreload (9.96s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-060000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-060000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.81363875s)

-- stdout --
	* [test-preload-060000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-060000" primary control-plane node in "test-preload-060000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-060000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:29:36.844478    8078 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:29:36.844607    8078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:29:36.844612    8078 out.go:304] Setting ErrFile to fd 2...
	I0702 21:29:36.844614    8078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:29:36.844755    8078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:29:36.845802    8078 out.go:298] Setting JSON to false
	I0702 21:29:36.861992    8078 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5345,"bootTime":1719975631,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:29:36.862085    8078 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:29:36.867182    8078 out.go:177] * [test-preload-060000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:29:36.874105    8078 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:29:36.874164    8078 notify.go:220] Checking for updates...
	I0702 21:29:36.881081    8078 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:29:36.884096    8078 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:29:36.887067    8078 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:29:36.890123    8078 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:29:36.893091    8078 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:29:36.896384    8078 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:29:36.896453    8078 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:29:36.901089    8078 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:29:36.908107    8078 start.go:297] selected driver: qemu2
	I0702 21:29:36.908114    8078 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:29:36.908124    8078 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:29:36.910375    8078 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:29:36.913118    8078 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:29:36.914362    8078 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:29:36.914391    8078 cni.go:84] Creating CNI manager for ""
	I0702 21:29:36.914398    8078 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:29:36.914406    8078 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:29:36.914434    8078 start.go:340] cluster config:
	{Name:test-preload-060000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:29:36.918063    8078 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:29:36.926085    8078 out.go:177] * Starting "test-preload-060000" primary control-plane node in "test-preload-060000" cluster
	I0702 21:29:36.930079    8078 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0702 21:29:36.930154    8078 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/test-preload-060000/config.json ...
	I0702 21:29:36.930174    8078 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/test-preload-060000/config.json: {Name:mka57c36883907345542a4822d15a9026a8011fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:29:36.930177    8078 cache.go:107] acquiring lock: {Name:mkb445bb3a6c171b6d3f5c4e988865c361a51c3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:29:36.930175    8078 cache.go:107] acquiring lock: {Name:mk238b4aebfc652293d7d4096b6761d9a2ddeb9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:29:36.930210    8078 cache.go:107] acquiring lock: {Name:mk962bf86fbb1e85a4663c22c7174a91db26380b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:29:36.930246    8078 cache.go:107] acquiring lock: {Name:mk71851a9833ac239e2e45170409467b8abc0d54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:29:36.930363    8078 cache.go:107] acquiring lock: {Name:mkfef2a5a82afdb23ed5c0a3d82a84ea135d75c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:29:36.930463    8078 cache.go:107] acquiring lock: {Name:mk1ad8e114d438fcd95f950ee500deec7da74afd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:29:36.930494    8078 cache.go:107] acquiring lock: {Name:mk0c8faa872bd0d6d0ec486ef5e494318fcd9b94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:29:36.930509    8078 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0702 21:29:36.930517    8078 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0702 21:29:36.930512    8078 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0702 21:29:36.930534    8078 cache.go:107] acquiring lock: {Name:mk046402257b8727d11ceccf3aeebc4bb567c8af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:29:36.930584    8078 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0702 21:29:36.930650    8078 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0702 21:29:36.930651    8078 start.go:360] acquireMachinesLock for test-preload-060000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:29:36.930702    8078 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0702 21:29:36.930773    8078 start.go:364] duration metric: took 102.625µs to acquireMachinesLock for "test-preload-060000"
	I0702 21:29:36.930808    8078 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:29:36.930789    8078 start.go:93] Provisioning new machine with config: &{Name:test-preload-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:29:36.930828    8078 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:29:36.930830    8078 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:29:36.935059    8078 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:29:36.938565    8078 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0702 21:29:36.938595    8078 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0702 21:29:36.938597    8078 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0702 21:29:36.938649    8078 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0702 21:29:36.938687    8078 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0702 21:29:36.938693    8078 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0702 21:29:36.940557    8078 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:29:36.940579    8078 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:29:36.953083    8078 start.go:159] libmachine.API.Create for "test-preload-060000" (driver="qemu2")
	I0702 21:29:36.953111    8078 client.go:168] LocalClient.Create starting
	I0702 21:29:36.953213    8078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:29:36.953246    8078 main.go:141] libmachine: Decoding PEM data...
	I0702 21:29:36.953257    8078 main.go:141] libmachine: Parsing certificate...
	I0702 21:29:36.953306    8078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:29:36.953331    8078 main.go:141] libmachine: Decoding PEM data...
	I0702 21:29:36.953341    8078 main.go:141] libmachine: Parsing certificate...
	I0702 21:29:36.953752    8078 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:29:37.089730    8078 main.go:141] libmachine: Creating SSH key...
	I0702 21:29:37.173141    8078 main.go:141] libmachine: Creating Disk image...
	I0702 21:29:37.173164    8078 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:29:37.173436    8078 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/disk.qcow2
	I0702 21:29:37.183258    8078 main.go:141] libmachine: STDOUT: 
	I0702 21:29:37.183276    8078 main.go:141] libmachine: STDERR: 
	I0702 21:29:37.183330    8078 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/disk.qcow2 +20000M
	I0702 21:29:37.192395    8078 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:29:37.192408    8078 main.go:141] libmachine: STDERR: 
	I0702 21:29:37.192423    8078 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/disk.qcow2
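The disk preparation above is a two-step qemu-img sequence: convert the raw seed image to qcow2 format, then grow the qcow2 file by the requested 20000 MB. A minimal standalone sketch of the same sequence (illustrative only, not minikube's libmachine code; assumes qemu-img on PATH and generic file names):

	// qemuimg.go - illustrative sketch of the logged qemu-img sequence
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Step 1: convert the raw boot disk to qcow2 format.
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2",
			"disk.qcow2.raw", "disk.qcow2").CombinedOutput(); err != nil {
			log.Fatalf("convert failed: %v\n%s", err, out)
		}
		// Step 2: grow the image by 20000 MB, matching the "+20000M" argument above.
		if out, err := exec.Command("qemu-img", "resize", "disk.qcow2", "+20000M").CombinedOutput(); err != nil {
			log.Fatalf("resize failed: %v\n%s", err, out)
		}
	}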
	I0702 21:29:37.192427    8078 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:29:37.192457    8078 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:ae:11:5d:dc:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/disk.qcow2
	I0702 21:29:37.194431    8078 main.go:141] libmachine: STDOUT: 
	I0702 21:29:37.194446    8078 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:29:37.194466    8078 client.go:171] duration metric: took 241.35475ms to LocalClient.Create
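Every failure in this run reduces to the STDERR line above: socket_vmnet_client cannot reach the socket_vmnet daemon, so QEMU never receives its network backend. Note the logged command's "-netdev socket,id=net0,fd=3": the client is expected to dial the unix socket and hand the connected descriptor to QEMU as fd 3. A rough sketch of that handoff pattern (an illustration of the mechanism, not the real socket_vmnet_client):

	// handoff.go - dial-then-exec sketch; the Dial below is the step that
	// fails in this log with "Connection refused" (no daemon listening).
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf(`Failed to connect to "/var/run/socket_vmnet": %v`, err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child process, which is what
		// "-netdev socket,id=net0,fd=3" refers to.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}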
	I0702 21:29:37.354048    8078 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0702 21:29:37.356337    8078 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0702 21:29:37.362362    8078 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0702 21:29:37.370618    8078 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0702 21:29:37.399394    8078 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0702 21:29:37.407757    8078 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0702 21:29:37.456970    8078 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0702 21:29:37.457016    8078 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0702 21:29:37.556240    8078 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0702 21:29:37.556290    8078 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 626.0865ms
	I0702 21:29:37.556353    8078 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
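The cache paths in these lines follow a convention visible in the log itself: images land under cache/images/<arch>/, keeping the registry path as directories and replacing the tag separator ':' with '_'. A small sketch of that mapping (cachePath is a hypothetical helper for illustration, not minikube's API):

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// cachePath maps an image reference onto the on-disk layout seen above.
	func cachePath(minikubeHome, arch, image string) string {
		return filepath.Join(minikubeHome, "cache", "images", arch,
			strings.ReplaceAll(image, ":", "_"))
	}

	func main() {
		fmt.Println(cachePath("/Users/jenkins/.minikube", "arm64", "registry.k8s.io/pause:3.7"))
		// -> /Users/jenkins/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	}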
	W0702 21:29:37.800170    8078 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0702 21:29:37.800246    8078 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0702 21:29:38.012680    8078 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0702 21:29:38.012755    8078 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.0826005s
	I0702 21:29:38.012785    8078 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0702 21:29:39.194765    8078 start.go:128] duration metric: took 2.263959916s to createHost
	I0702 21:29:39.194812    8078 start.go:83] releasing machines lock for "test-preload-060000", held for 2.264072084s
	W0702 21:29:39.194868    8078 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:29:39.208422    8078 out.go:177] * Deleting "test-preload-060000" in qemu2 ...
	W0702 21:29:39.231708    8078 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:29:39.231746    8078 start.go:728] Will try again in 5 seconds ...
	I0702 21:29:40.085231    8078 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0702 21:29:40.085286    8078 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.154914s
	I0702 21:29:40.085316    8078 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0702 21:29:40.092716    8078 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0702 21:29:40.092752    8078 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.162461833s
	I0702 21:29:40.092773    8078 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0702 21:29:42.350173    8078 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0702 21:29:42.350219    8078 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.420107125s
	I0702 21:29:42.350248    8078 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0702 21:29:42.358968    8078 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0702 21:29:42.359025    8078 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.42871425s
	I0702 21:29:42.359045    8078 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0702 21:29:42.525193    8078 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0702 21:29:42.525246    8078 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.595181417s
	I0702 21:29:42.525273    8078 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0702 21:29:44.084835    8078 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0702 21:29:44.084883    8078 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 7.154488334s
	I0702 21:29:44.084910    8078 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0702 21:29:44.084936    8078 cache.go:87] Successfully saved all images to host disk.
	I0702 21:29:44.233837    8078 start.go:360] acquireMachinesLock for test-preload-060000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:29:44.234225    8078 start.go:364] duration metric: took 324.25µs to acquireMachinesLock for "test-preload-060000"
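The lock spec logged here (Delay:500ms Timeout:13m0s) describes poll-with-timeout acquisition: retry every Delay until Timeout expires. A minimal sketch of that shape, assuming a caller-supplied try function (illustrative, not minikube's actual mutex implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// acquireWithRetry polls try every delay until it succeeds or timeout elapses.
	func acquireWithRetry(try func() error, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := try(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("lock not acquired within %v", timeout)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		start := time.Now()
		err := acquireWithRetry(func() error {
			if time.Since(start) < time.Second {
				return errors.New("locked") // pretend the lock is briefly held elsewhere
			}
			return nil
		}, 500*time.Millisecond, 13*time.Minute)
		fmt.Println("acquire:", err)
	}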
	I0702 21:29:44.234445    8078 start.go:93] Provisioning new machine with config: &{Name:test-preload-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:29:44.234678    8078 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:29:44.245255    8078 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:29:44.294363    8078 start.go:159] libmachine.API.Create for "test-preload-060000" (driver="qemu2")
	I0702 21:29:44.294422    8078 client.go:168] LocalClient.Create starting
	I0702 21:29:44.294536    8078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:29:44.294600    8078 main.go:141] libmachine: Decoding PEM data...
	I0702 21:29:44.294618    8078 main.go:141] libmachine: Parsing certificate...
	I0702 21:29:44.294674    8078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:29:44.294718    8078 main.go:141] libmachine: Decoding PEM data...
	I0702 21:29:44.294733    8078 main.go:141] libmachine: Parsing certificate...
	I0702 21:29:44.295530    8078 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:29:44.434275    8078 main.go:141] libmachine: Creating SSH key...
	I0702 21:29:44.568262    8078 main.go:141] libmachine: Creating Disk image...
	I0702 21:29:44.568270    8078 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:29:44.568437    8078 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/disk.qcow2
	I0702 21:29:44.578126    8078 main.go:141] libmachine: STDOUT: 
	I0702 21:29:44.578144    8078 main.go:141] libmachine: STDERR: 
	I0702 21:29:44.578187    8078 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/disk.qcow2 +20000M
	I0702 21:29:44.586102    8078 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:29:44.586131    8078 main.go:141] libmachine: STDERR: 
	I0702 21:29:44.586144    8078 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/disk.qcow2
	I0702 21:29:44.586152    8078 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:29:44.586198    8078 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:dc:d4:89:00:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/test-preload-060000/disk.qcow2
	I0702 21:29:44.587966    8078 main.go:141] libmachine: STDOUT: 
	I0702 21:29:44.587982    8078 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:29:44.587997    8078 client.go:171] duration metric: took 293.574292ms to LocalClient.Create
	I0702 21:29:46.590193    8078 start.go:128] duration metric: took 2.355518208s to createHost
	I0702 21:29:46.590325    8078 start.go:83] releasing machines lock for "test-preload-060000", held for 2.356091458s
	W0702 21:29:46.590805    8078 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:29:46.599381    8078 out.go:177] 
	W0702 21:29:46.604501    8078 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:29:46.604531    8078 out.go:239] * 
	* 
	W0702 21:29:46.607277    8078 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:29:46.614362    8078 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-060000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-02 21:29:46.633244 -0700 PDT m=+675.341010376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-060000 -n test-preload-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-060000 -n test-preload-060000: exit status 7 (68.293959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-060000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-060000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-060000
--- FAIL: TestPreload (9.96s)

TestScheduledStopUnix (10.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-589000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-589000 --memory=2048 --driver=qemu2 : exit status 80 (9.879842875s)

-- stdout --
	* [scheduled-stop-589000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-589000" primary control-plane node in "scheduled-stop-589000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-589000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-589000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-589000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-589000" primary control-plane node in "scheduled-stop-589000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-589000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-589000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-02 21:29:56.654964 -0700 PDT m=+685.362931001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-589000 -n scheduled-stop-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-589000 -n scheduled-stop-589000: exit status 7 (70.203625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-589000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-589000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-589000
--- FAIL: TestScheduledStopUnix (10.02s)

TestSkaffold (11.95s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1819962657 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-213000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-213000 --memory=2600 --driver=qemu2 : exit status 80 (9.731707625s)

-- stdout --
	* [skaffold-213000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-213000" primary control-plane node in "skaffold-213000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-213000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-213000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-213000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-213000" primary control-plane node in "skaffold-213000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-213000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-213000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-02 21:30:08.60555 -0700 PDT m=+697.313757376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-213000 -n skaffold-213000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-213000 -n skaffold-213000: exit status 7 (62.050417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-213000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-213000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-213000
--- FAIL: TestSkaffold (11.95s)

TestRunningBinaryUpgrade (606.4s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1748392557 start -p running-upgrade-908000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1748392557 start -p running-upgrade-908000 --memory=2200 --vm-driver=qemu2 : (51.62919975s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-908000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-908000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.296108416s)

-- stdout --
	* [running-upgrade-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-908000" primary control-plane node in "running-upgrade-908000" cluster
	* Updating the running qemu2 "running-upgrade-908000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0702 21:31:01.534918    8323 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:31:01.535041    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:31:01.535045    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:31:01.535047    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:31:01.535186    8323 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:31:01.536262    8323 out.go:298] Setting JSON to false
	I0702 21:31:01.552475    8323 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5430,"bootTime":1719975631,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:31:01.552548    8323 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:31:01.557889    8323 out.go:177] * [running-upgrade-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:31:01.564854    8323 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:31:01.564908    8323 notify.go:220] Checking for updates...
	I0702 21:31:01.571833    8323 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:31:01.574809    8323 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:31:01.577840    8323 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:31:01.580804    8323 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:31:01.583917    8323 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:31:01.587116    8323 config.go:182] Loaded profile config "running-upgrade-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:31:01.590728    8323 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0702 21:31:01.593846    8323 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:31:01.598812    8323 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:31:01.605802    8323 start.go:297] selected driver: qemu2
	I0702 21:31:01.605810    8323 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51204 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0702 21:31:01.605857    8323 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:31:01.608136    8323 cni.go:84] Creating CNI manager for ""
	I0702 21:31:01.608152    8323 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:31:01.608185    8323 start.go:340] cluster config:
	{Name:running-upgrade-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51204 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0702 21:31:01.608238    8323 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:31:01.619823    8323 out.go:177] * Starting "running-upgrade-908000" primary control-plane node in "running-upgrade-908000" cluster
	I0702 21:31:01.623846    8323 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0702 21:31:01.623861    8323 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0702 21:31:01.623869    8323 cache.go:56] Caching tarball of preloaded images
	I0702 21:31:01.623938    8323 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:31:01.623944    8323 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0702 21:31:01.623998    8323 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/config.json ...
	I0702 21:31:01.624447    8323 start.go:360] acquireMachinesLock for running-upgrade-908000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:31:01.624479    8323 start.go:364] duration metric: took 24.917µs to acquireMachinesLock for "running-upgrade-908000"
	I0702 21:31:01.624489    8323 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:31:01.624495    8323 fix.go:54] fixHost starting: 
	I0702 21:31:01.625196    8323 fix.go:112] recreateIfNeeded on running-upgrade-908000: state=Running err=<nil>
	W0702 21:31:01.625206    8323 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:31:01.633827    8323 out.go:177] * Updating the running qemu2 "running-upgrade-908000" VM ...
	I0702 21:31:01.637792    8323 machine.go:94] provisionDockerMachine start ...
	I0702 21:31:01.637845    8323 main.go:141] libmachine: Using SSH client type: native
	I0702 21:31:01.637995    8323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a568e0] 0x102a59140 <nil>  [] 0s} localhost 51172 <nil> <nil>}
	I0702 21:31:01.638002    8323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0702 21:31:01.709239    8323 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-908000
	
	I0702 21:31:01.709257    8323 buildroot.go:166] provisioning hostname "running-upgrade-908000"
	I0702 21:31:01.709326    8323 main.go:141] libmachine: Using SSH client type: native
	I0702 21:31:01.709467    8323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a568e0] 0x102a59140 <nil>  [] 0s} localhost 51172 <nil> <nil>}
	I0702 21:31:01.709476    8323 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-908000 && echo "running-upgrade-908000" | sudo tee /etc/hostname
	I0702 21:31:01.783714    8323 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-908000
	
	I0702 21:31:01.783780    8323 main.go:141] libmachine: Using SSH client type: native
	I0702 21:31:01.783892    8323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a568e0] 0x102a59140 <nil>  [] 0s} localhost 51172 <nil> <nil>}
	I0702 21:31:01.783899    8323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-908000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-908000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-908000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0702 21:31:01.850898    8323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0702 21:31:01.850912    8323 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19184-6175/.minikube CaCertPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19184-6175/.minikube}
	I0702 21:31:01.850928    8323 buildroot.go:174] setting up certificates
	I0702 21:31:01.850932    8323 provision.go:84] configureAuth start
	I0702 21:31:01.850937    8323 provision.go:143] copyHostCerts
	I0702 21:31:01.851032    8323 exec_runner.go:144] found /Users/jenkins/minikube-integration/19184-6175/.minikube/key.pem, removing ...
	I0702 21:31:01.851040    8323 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19184-6175/.minikube/key.pem
	I0702 21:31:01.851164    8323 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19184-6175/.minikube/key.pem (1675 bytes)
	I0702 21:31:01.851342    8323 exec_runner.go:144] found /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.pem, removing ...
	I0702 21:31:01.851346    8323 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.pem
	I0702 21:31:01.851409    8323 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.pem (1078 bytes)
	I0702 21:31:01.851524    8323 exec_runner.go:144] found /Users/jenkins/minikube-integration/19184-6175/.minikube/cert.pem, removing ...
	I0702 21:31:01.851527    8323 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19184-6175/.minikube/cert.pem
	I0702 21:31:01.851596    8323 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19184-6175/.minikube/cert.pem (1123 bytes)
	I0702 21:31:01.851673    8323 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-908000 san=[127.0.0.1 localhost minikube running-upgrade-908000]
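The server certificate generated here is an ordinary X.509 leaf whose subject alternative names carry the san=[...] list above. A compact standard-library sketch of issuing such a certificate (self-signed for brevity, where the real flow signs with the CA key; illustrative only):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-908000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the logged san=[...] list:
			DNSNames:    []string{"localhost", "minikube", "running-upgrade-908000"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}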
	I0702 21:31:01.910315    8323 provision.go:177] copyRemoteCerts
	I0702 21:31:01.910353    8323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0702 21:31:01.910360    8323 sshutil.go:53] new ssh client: &{IP:localhost Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	I0702 21:31:01.948052    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0702 21:31:01.954808    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0702 21:31:01.961619    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0702 21:31:01.968447    8323 provision.go:87] duration metric: took 117.508459ms to configureAuth
	I0702 21:31:01.968456    8323 buildroot.go:189] setting minikube options for container-runtime
	I0702 21:31:01.968558    8323 config.go:182] Loaded profile config "running-upgrade-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:31:01.968591    8323 main.go:141] libmachine: Using SSH client type: native
	I0702 21:31:01.968675    8323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a568e0] 0x102a59140 <nil>  [] 0s} localhost 51172 <nil> <nil>}
	I0702 21:31:01.968680    8323 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0702 21:31:02.038645    8323 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0702 21:31:02.038656    8323 buildroot.go:70] root file system type: tmpfs
	I0702 21:31:02.038715    8323 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0702 21:31:02.038759    8323 main.go:141] libmachine: Using SSH client type: native
	I0702 21:31:02.038874    8323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a568e0] 0x102a59140 <nil>  [] 0s} localhost 51172 <nil> <nil>}
	I0702 21:31:02.038913    8323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0702 21:31:02.111045    8323 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this setting.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0702 21:31:02.111112    8323 main.go:141] libmachine: Using SSH client type: native
	I0702 21:31:02.111239    8323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a568e0] 0x102a59140 <nil>  [] 0s} localhost 51172 <nil> <nil>}
	I0702 21:31:02.111247    8323 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0702 21:31:02.181400    8323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0702 21:31:02.181410    8323 machine.go:97] duration metric: took 543.618375ms to provisionDockerMachine
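	The ExecStart-clearing idiom described in the unit's own comments is standard systemd override behavior: an empty ExecStart= discards the inherited command before the replacement is declared. A minimal sketch of the same pattern as a drop-in (hypothetical path, rather than the whole-file replacement performed above):
	  sudo mkdir -p /etc/systemd/system/docker.service.d
	  printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
	    | sudo tee /etc/systemd/system/docker.service.d/10-override.conf
	  sudo systemctl daemon-reload && sudo systemctl restart docker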
	I0702 21:31:02.181415    8323 start.go:293] postStartSetup for "running-upgrade-908000" (driver="qemu2")
	I0702 21:31:02.181422    8323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0702 21:31:02.181494    8323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0702 21:31:02.181503    8323 sshutil.go:53] new ssh client: &{IP:localhost Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	I0702 21:31:02.217268    8323 ssh_runner.go:195] Run: cat /etc/os-release
	I0702 21:31:02.218549    8323 info.go:137] Remote host: Buildroot 2021.02.12
	I0702 21:31:02.218556    8323 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19184-6175/.minikube/addons for local assets ...
	I0702 21:31:02.218628    8323 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19184-6175/.minikube/files for local assets ...
	I0702 21:31:02.218773    8323 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19184-6175/.minikube/files/etc/ssl/certs/66692.pem -> 66692.pem in /etc/ssl/certs
	I0702 21:31:02.218897    8323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0702 21:31:02.221482    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/files/etc/ssl/certs/66692.pem --> /etc/ssl/certs/66692.pem (1708 bytes)
	I0702 21:31:02.228588    8323 start.go:296] duration metric: took 47.168958ms for postStartSetup
	I0702 21:31:02.228601    8323 fix.go:56] duration metric: took 604.120375ms for fixHost
	I0702 21:31:02.228628    8323 main.go:141] libmachine: Using SSH client type: native
	I0702 21:31:02.228724    8323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a568e0] 0x102a59140 <nil>  [] 0s} localhost 51172 <nil> <nil>}
	I0702 21:31:02.228728    8323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0702 21:31:02.295972    8323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719981061.895580430
	
	I0702 21:31:02.295983    8323 fix.go:216] guest clock: 1719981061.895580430
	I0702 21:31:02.295986    8323 fix.go:229] Guest: 2024-07-02 21:31:01.89558043 -0700 PDT Remote: 2024-07-02 21:31:02.228603 -0700 PDT m=+0.713509917 (delta=-333.02257ms)
	I0702 21:31:02.295998    8323 fix.go:200] guest clock delta is within tolerance: -333.02257ms
	I0702 21:31:02.296001    8323 start.go:83] releasing machines lock for "running-upgrade-908000", held for 671.531042ms
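	The reported delta is guest clock minus host clock: 1719981061.895580430 - 1719981062.228603 ≈ -0.333 s, i.e. the guest runs about 333 ms behind the host, inside the tolerance the log accepts. A rough manual version of the same comparison, assuming the SSH port and key from this run:
	  host=$(date +%s.%N)
	  guest=$(ssh -p 51172 docker@localhost 'date +%s.%N')
	  echo "delta (guest-host): $(echo "$guest - $host" | bc) s"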
	I0702 21:31:02.296060    8323 ssh_runner.go:195] Run: cat /version.json
	I0702 21:31:02.296071    8323 sshutil.go:53] new ssh client: &{IP:localhost Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	I0702 21:31:02.296061    8323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0702 21:31:02.296099    8323 sshutil.go:53] new ssh client: &{IP:localhost Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	W0702 21:31:02.296627    8323 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51172: connect: connection refused
	I0702 21:31:02.296674    8323 retry.go:31] will retry after 268.480605ms: dial tcp [::1]:51172: connect: connection refused
	W0702 21:31:02.613584    8323 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0702 21:31:02.613715    8323 ssh_runner.go:195] Run: systemctl --version
	I0702 21:31:02.617149    8323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0702 21:31:02.620243    8323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0702 21:31:02.620305    8323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0702 21:31:02.625612    8323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0702 21:31:02.633199    8323 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
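	The two find/sed passes rewrite any bridge and podman CNI definitions to the 10.244.0.0/16 pod subnet and drop IPv6 entries; the result can be spot-checked on the guest with:
	  grep -E '"(subnet|gateway)"' /etc/cni/net.d/87-podman-bridge.conflist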
	I0702 21:31:02.633213    8323 start.go:494] detecting cgroup driver to use...
	I0702 21:31:02.633414    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0702 21:31:02.641688    8323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0702 21:31:02.645415    8323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0702 21:31:02.649235    8323 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0702 21:31:02.649257    8323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0702 21:31:02.652969    8323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0702 21:31:02.656390    8323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0702 21:31:02.659386    8323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0702 21:31:02.662133    8323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0702 21:31:02.665248    8323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0702 21:31:02.668262    8323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0702 21:31:02.671124    8323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
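	Taken together, these sed edits pin containerd's CRI plugin to the cgroupfs driver, the runc v2 shim, the pause:3.7 sandbox image, and /etc/cni/net.d. One way to verify the merged result (containerd 1.x):
	  sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image|conf_dir'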
	I0702 21:31:02.673856    8323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0702 21:31:02.677614    8323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0702 21:31:02.680421    8323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:31:02.768762    8323 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0702 21:31:02.775320    8323 start.go:494] detecting cgroup driver to use...
	I0702 21:31:02.775396    8323 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0702 21:31:02.788027    8323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0702 21:31:02.792230    8323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0702 21:31:02.798188    8323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0702 21:31:02.803135    8323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0702 21:31:02.807397    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0702 21:31:02.813035    8323 ssh_runner.go:195] Run: which cri-dockerd
	I0702 21:31:02.814368    8323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0702 21:31:02.817325    8323 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0702 21:31:02.822219    8323 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0702 21:31:02.912828    8323 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0702 21:31:03.004930    8323 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0702 21:31:03.004998    8323 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0702 21:31:03.010190    8323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:31:03.093600    8323 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0702 21:31:16.616237    8323 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.522891625s)
	I0702 21:31:16.616303    8323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0702 21:31:16.621187    8323 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0702 21:31:16.628742    8323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0702 21:31:16.634529    8323 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0702 21:31:16.716871    8323 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0702 21:31:16.787128    8323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:31:16.869754    8323 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0702 21:31:16.876298    8323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0702 21:31:16.881014    8323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:31:16.962130    8323 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0702 21:31:17.001355    8323 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0702 21:31:17.001459    8323 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0702 21:31:17.003673    8323 start.go:562] Will wait 60s for crictl version
	I0702 21:31:17.003720    8323 ssh_runner.go:195] Run: which crictl
	I0702 21:31:17.005091    8323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0702 21:31:17.023598    8323 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0702 21:31:17.023660    8323 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0702 21:31:17.036796    8323 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0702 21:31:17.056728    8323 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0702 21:31:17.056857    8323 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
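	This grep verifies that host.minikube.internal resolves to 10.0.2.2, the host side of QEMU's user-mode NAT. Were the entry missing, a manual equivalent would be:
	  printf '10.0.2.2\thost.minikube.internal\n' | sudo tee -a /etc/hosts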
	I0702 21:31:17.058160    8323 kubeadm.go:877] updating cluster {Name:running-upgrade-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51204 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0702 21:31:17.058205    8323 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0702 21:31:17.058242    8323 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0702 21:31:17.068924    8323 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0702 21:31:17.068932    8323 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0702 21:31:17.068979    8323 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0702 21:31:17.072362    8323 ssh_runner.go:195] Run: which lz4
	I0702 21:31:17.073488    8323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0702 21:31:17.074734    8323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0702 21:31:17.074743    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0702 21:31:18.021887    8323 docker.go:649] duration metric: took 948.44425ms to copy over tarball
	I0702 21:31:18.021947    8323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0702 21:31:19.275475    8323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.253538708s)
	I0702 21:31:19.275488    8323 ssh_runner.go:146] rm: /preloaded.tar.lz4
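	The preload is an lz4-compressed tar of the runtime's image store, unpacked directly over /var. Its contents can be listed without extracting, assuming lz4 is installed on the host:
	  lz4 -dc preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 | tar -tf - | head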
	I0702 21:31:19.291816    8323 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0702 21:31:19.294990    8323 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0702 21:31:19.299973    8323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:31:19.369135    8323 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0702 21:31:20.585285    8323 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.2161565s)
	I0702 21:31:20.585390    8323 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0702 21:31:20.598598    8323 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0702 21:31:20.598609    8323 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0702 21:31:20.598614    8323 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0702 21:31:20.602217    8323 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:31:20.603837    8323 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:31:20.605942    8323 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:31:20.606140    8323 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:31:20.608246    8323 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:31:20.608279    8323 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:31:20.609273    8323 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:31:20.609536    8323 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:31:20.610673    8323 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0702 21:31:20.611190    8323 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:31:20.611887    8323 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0702 21:31:20.611971    8323 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:31:20.613059    8323 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:31:20.613063    8323 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0702 21:31:20.613768    8323 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0702 21:31:20.614365    8323 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:31:20.983738    8323 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:31:20.997659    8323 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0702 21:31:20.997689    8323 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:31:20.997742    8323 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:31:21.009072    8323 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0702 21:31:21.024850    8323 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:31:21.026934    8323 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:31:21.032995    8323 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:31:21.036527    8323 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0702 21:31:21.036545    8323 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:31:21.036592    8323 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:31:21.039496    8323 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0702 21:31:21.046700    8323 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0702 21:31:21.046721    8323 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:31:21.046777    8323 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:31:21.048485    8323 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0702 21:31:21.048499    8323 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:31:21.048527    8323 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:31:21.058689    8323 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0702 21:31:21.060848    8323 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0702 21:31:21.065364    8323 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0702 21:31:21.065382    8323 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0702 21:31:21.065429    8323 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0702 21:31:21.076386    8323 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0702 21:31:21.076402    8323 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0702 21:31:21.082740    8323 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0702 21:31:21.082758    8323 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0702 21:31:21.082761    8323 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0702 21:31:21.082804    8323 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0702 21:31:21.082863    8323 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0702 21:31:21.092867    8323 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0702 21:31:21.092879    8323 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0702 21:31:21.092892    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0702 21:31:21.092976    8323 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0702 21:31:21.094711    8323 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0702 21:31:21.094721    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0702 21:31:21.112931    8323 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0702 21:31:21.113084    8323 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:31:21.113362    8323 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0702 21:31:21.113370    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0702 21:31:21.161360    8323 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
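	Each image follows the cycle just completed for pause:3.7: inspect the ID in the runtime, remove the stale tag, copy the cached archive over, and pipe it into docker load. As standalone shell on the guest (paths from this run):
	  docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7
	  docker rmi registry.k8s.io/pause:3.7
	  sudo cat /var/lib/minikube/images/pause_3.7 | docker load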
	I0702 21:31:21.161402    8323 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0702 21:31:21.161423    8323 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:31:21.161478    8323 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:31:21.191313    8323 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0702 21:31:21.191449    8323 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0702 21:31:21.203854    8323 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0702 21:31:21.203882    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0702 21:31:21.219365    8323 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0702 21:31:21.219492    8323 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:31:21.261267    8323 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0702 21:31:21.261298    8323 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:31:21.261359    8323 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:31:21.298726    8323 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0702 21:31:21.298739    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0702 21:31:22.271397    8323 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.010021584s)
	I0702 21:31:22.271444    8323 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0702 21:31:22.271458    8323 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0702 21:31:22.271501    8323 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0702 21:31:22.271591    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0702 21:31:22.271820    8323 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0702 21:31:22.445869    8323 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0702 21:31:22.445907    8323 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0702 21:31:22.445935    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0702 21:31:22.476983    8323 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0702 21:31:22.476998    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0702 21:31:22.715728    8323 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0702 21:31:22.715768    8323 cache_images.go:92] duration metric: took 2.117189792s to LoadCachedImages
	W0702 21:31:22.715806    8323 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0702 21:31:22.715811    8323 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0702 21:31:22.715871    8323 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-908000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
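	The kubelet unit uses the same ExecStart-clearing override as the docker unit, pointing the kubelet at cri-dockerd and the node IP. Once the drop-in is written, the effective unit can be inspected with:
	  sudo systemctl cat kubelet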
	I0702 21:31:22.715937    8323 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0702 21:31:22.729567    8323 cni.go:84] Creating CNI manager for ""
	I0702 21:31:22.729578    8323 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:31:22.729583    8323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0702 21:31:22.729591    8323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-908000 NodeName:running-upgrade-908000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0702 21:31:22.729656    8323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-908000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
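	This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new and swapped in below. Its stacked documents can be sanity-checked against the pinned binaries with, for example, the real preflight phase (expect warnings on a node that is already running):
	  sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml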
	
	I0702 21:31:22.729712    8323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0702 21:31:22.733167    8323 binaries.go:44] Found k8s binaries, skipping transfer
	I0702 21:31:22.733196    8323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0702 21:31:22.736447    8323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0702 21:31:22.741523    8323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0702 21:31:22.746765    8323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0702 21:31:22.752102    8323 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0702 21:31:22.753446    8323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:31:22.838903    8323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0702 21:31:22.843676    8323 certs.go:68] Setting up /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000 for IP: 10.0.2.15
	I0702 21:31:22.843696    8323 certs.go:194] generating shared ca certs ...
	I0702 21:31:22.843706    8323 certs.go:226] acquiring lock for ca certs: {Name:mk1563fd1929f66ff1d36559bceb7dd892d19aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:31:22.843959    8323 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.key
	I0702 21:31:22.844024    8323 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/proxy-client-ca.key
	I0702 21:31:22.844029    8323 certs.go:256] generating profile certs ...
	I0702 21:31:22.844107    8323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/client.key
	I0702 21:31:22.844119    8323 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/apiserver.key.bcc26e67
	I0702 21:31:22.844134    8323 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/apiserver.crt.bcc26e67 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0702 21:31:22.890110    8323 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/apiserver.crt.bcc26e67 ...
	I0702 21:31:22.890125    8323 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/apiserver.crt.bcc26e67: {Name:mk7f7ff63589c4a913e2feb64c3fca4178758723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:31:22.890401    8323 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/apiserver.key.bcc26e67 ...
	I0702 21:31:22.890406    8323 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/apiserver.key.bcc26e67: {Name:mkccf8ff640ec64fade757020c53a017cd02ff26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:31:22.890527    8323 certs.go:381] copying /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/apiserver.crt.bcc26e67 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/apiserver.crt
	I0702 21:31:22.890659    8323 certs.go:385] copying /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/apiserver.key.bcc26e67 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/apiserver.key
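	The new apiserver certificate is issued for the in-cluster service IP 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR), the loopback, 10.0.0.1, and the node IP 10.0.2.15. Its SANs can be confirmed with:
	  openssl x509 -noout -text -in /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/apiserver.crt | grep -A1 'Subject Alternative Name'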
	I0702 21:31:22.890817    8323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/proxy-client.key
	I0702 21:31:22.890945    8323 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/6669.pem (1338 bytes)
	W0702 21:31:22.890973    8323 certs.go:480] ignoring /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/6669_empty.pem, impossibly tiny 0 bytes
	I0702 21:31:22.890978    8323 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca-key.pem (1675 bytes)
	I0702 21:31:22.891005    8323 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem (1078 bytes)
	I0702 21:31:22.891029    8323 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem (1123 bytes)
	I0702 21:31:22.891053    8323 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/key.pem (1675 bytes)
	I0702 21:31:22.891105    8323 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/files/etc/ssl/certs/66692.pem (1708 bytes)
	I0702 21:31:22.891451    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0702 21:31:22.898676    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0702 21:31:22.905558    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0702 21:31:22.913172    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0702 21:31:22.921125    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0702 21:31:22.928489    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0702 21:31:22.935511    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0702 21:31:22.942431    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0702 21:31:22.949315    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/files/etc/ssl/certs/66692.pem --> /usr/share/ca-certificates/66692.pem (1708 bytes)
	I0702 21:31:22.956704    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0702 21:31:22.963478    8323 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/6669.pem --> /usr/share/ca-certificates/6669.pem (1338 bytes)
	I0702 21:31:22.970200    8323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0702 21:31:22.975153    8323 ssh_runner.go:195] Run: openssl version
	I0702 21:31:22.976971    8323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/66692.pem && ln -fs /usr/share/ca-certificates/66692.pem /etc/ssl/certs/66692.pem"
	I0702 21:31:22.980366    8323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/66692.pem
	I0702 21:31:22.981855    8323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 04:19 /usr/share/ca-certificates/66692.pem
	I0702 21:31:22.981870    8323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/66692.pem
	I0702 21:31:22.983587    8323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/66692.pem /etc/ssl/certs/3ec20f2e.0"
	I0702 21:31:22.986260    8323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0702 21:31:22.989612    8323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0702 21:31:22.991015    8323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 04:30 /usr/share/ca-certificates/minikubeCA.pem
	I0702 21:31:22.991036    8323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0702 21:31:22.992579    8323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0702 21:31:22.995647    8323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6669.pem && ln -fs /usr/share/ca-certificates/6669.pem /etc/ssl/certs/6669.pem"
	I0702 21:31:22.998359    8323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6669.pem
	I0702 21:31:22.999882    8323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 04:19 /usr/share/ca-certificates/6669.pem
	I0702 21:31:22.999903    8323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6669.pem
	I0702 21:31:23.001738    8323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6669.pem /etc/ssl/certs/51391683.0"
	I0702 21:31:23.004815    8323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0702 21:31:23.006552    8323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0702 21:31:23.008221    8323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0702 21:31:23.010016    8323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0702 21:31:23.011981    8323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0702 21:31:23.013967    8323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0702 21:31:23.016029    8323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
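	Each -checkend 86400 probe exits non-zero when the certificate expires within the next 86400 seconds (24 h); that exit code is what decides whether a cert gets regenerated. Standalone:
	  openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	    && echo 'valid for at least 24h' || echo 'expires within 24h'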
	I0702 21:31:23.018034    8323 kubeadm.go:391] StartCluster: {Name:running-upgrade-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51204 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0702 21:31:23.018103    8323 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0702 21:31:23.028893    8323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0702 21:31:23.032755    8323 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0702 21:31:23.032764    8323 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0702 21:31:23.032767    8323 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0702 21:31:23.032788    8323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0702 21:31:23.035870    8323 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0702 21:31:23.035905    8323 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-908000" does not appear in /Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:31:23.035920    8323 kubeconfig.go:62] /Users/jenkins/minikube-integration/19184-6175/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-908000" cluster setting kubeconfig missing "running-upgrade-908000" context setting]
	I0702 21:31:23.036090    8323 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/kubeconfig: {Name:mk27cb7c8451cb331bdc98ce6310b0b3aba92b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:31:23.036966    8323 kapi.go:59] client config for running-upgrade-908000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/client.key", CAFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103de5a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0702 21:31:23.037781    8323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0702 21:31:23.041036    8323 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-908000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0702 21:31:23.041043    8323 kubeadm.go:1154] stopping kube-system containers ...
	I0702 21:31:23.041082    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0702 21:31:23.052481    8323 docker.go:483] Stopping containers: [bd0edc3cae5a f7cd14e8e84e 54ec470b077b 1ee4e71e2cd8 5af0771bc4d8 2eba3151847e 6c69b3f6dd41 36785c6f1cd4 e396fa228dc7 d3b08a39b008 08d63d60a70f b87f76330109]
	I0702 21:31:23.052540    8323 ssh_runner.go:195] Run: docker stop bd0edc3cae5a f7cd14e8e84e 54ec470b077b 1ee4e71e2cd8 5af0771bc4d8 2eba3151847e 6c69b3f6dd41 36785c6f1cd4 e396fa228dc7 d3b08a39b008 08d63d60a70f b87f76330109
	I0702 21:31:23.063567    8323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0702 21:31:23.162292    8323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0702 21:31:23.166312    8323 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Jul  3 04:30 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul  3 04:30 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul  3 04:30 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul  3 04:30 /etc/kubernetes/scheduler.conf
	
	I0702 21:31:23.166342    8323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/admin.conf
	I0702 21:31:23.169599    8323 kubeadm.go:162] "https://control-plane.minikube.internal:51204" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0702 21:31:23.169629    8323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0702 21:31:23.173044    8323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/kubelet.conf
	I0702 21:31:23.176425    8323 kubeadm.go:162] "https://control-plane.minikube.internal:51204" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0702 21:31:23.176447    8323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0702 21:31:23.179483    8323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/controller-manager.conf
	I0702 21:31:23.182118    8323 kubeadm.go:162] "https://control-plane.minikube.internal:51204" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0702 21:31:23.182146    8323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0702 21:31:23.185022    8323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/scheduler.conf
	I0702 21:31:23.187979    8323 kubeadm.go:162] "https://control-plane.minikube.internal:51204" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0702 21:31:23.187999    8323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0702 21:31:23.190404    8323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0702 21:31:23.193488    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0702 21:31:23.214952    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0702 21:31:24.081311    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0702 21:31:24.280697    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0702 21:31:24.303077    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
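With the stale kubeconfigs cleared, the control plane is rebuilt piecewise: kubeadm init runs phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied /var/tmp/minikube/kubeadm.yaml, with PATH prefixed so the version-pinned v1.24.1 binaries win over anything else on the node. A sketch of the same loop, with error handling simplified:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            // Later duplicate env entries win, so this pins the binary dir.
            cmd.Env = append(os.Environ(),
                "PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"))
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
                return
            }
        }
    }
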
	I0702 21:31:24.323069    8323 api_server.go:52] waiting for apiserver process to appear ...
	I0702 21:31:24.323145    8323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0702 21:31:24.825320    8323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0702 21:31:25.325195    8323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0702 21:31:25.329390    8323 api_server.go:72] duration metric: took 1.006344333s to wait for apiserver process to appear ...
	I0702 21:31:25.329398    8323 api_server.go:88] waiting for apiserver healthz status ...
	I0702 21:31:25.329413    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:31:30.331420    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:31:30.331478    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:31:35.331567    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:31:35.331607    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:31:40.332283    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:31:40.332327    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:31:45.332898    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:31:45.332948    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:31:50.333644    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:31:50.333717    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:31:55.334759    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:31:55.334785    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:32:00.335527    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:32:00.335591    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:32:05.337323    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:32:05.337350    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:32:10.339257    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:32:10.339341    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:32:15.341226    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:32:15.341241    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:32:20.343453    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:32:20.343539    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:32:25.346194    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
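From here the wait loop degenerates: every healthz probe against https://10.0.2.15:8443 times out, and the ~5 s gap between each "Checking" line and its "stopped" line is the per-request client timeout, not a sleep. After roughly a minute of consecutive failures minikube switches to collecting diagnostics (next lines). A sketch of such a probe loop; the 5 s timeout and the skipped TLS verification are assumptions read off the log, not confirmed minikube settings:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The probe may race certificate regeneration, so skip verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // matches the repeated failures above
                time.Sleep(500 * time.Millisecond) // pace retries on fast failures
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
        }
        fmt.Println("gave up waiting for healthz")
    }
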
	I0702 21:32:25.346638    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:32:25.385917    8323 logs.go:276] 2 containers: [b4d169a6fe7c 54ec470b077b]
	I0702 21:32:25.386064    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:32:25.408229    8323 logs.go:276] 2 containers: [e47ea0c109b8 1ee4e71e2cd8]
	I0702 21:32:25.408339    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:32:25.422703    8323 logs.go:276] 1 containers: [b9a52daacedd]
	I0702 21:32:25.422765    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:32:25.435470    8323 logs.go:276] 2 containers: [a8f9a2711a23 36785c6f1cd4]
	I0702 21:32:25.435547    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:32:25.446261    8323 logs.go:276] 1 containers: [1befadafa2eb]
	I0702 21:32:25.446331    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:32:25.456465    8323 logs.go:276] 2 containers: [9ddff714a977 6c69b3f6dd41]
	I0702 21:32:25.456530    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:32:25.472250    8323 logs.go:276] 0 containers: []
	W0702 21:32:25.472261    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:32:25.472319    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:32:25.482732    8323 logs.go:276] 1 containers: [d6a8b9012496]
	I0702 21:32:25.482750    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:32:25.482755    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:32:25.487234    8323 logs.go:123] Gathering logs for etcd [1ee4e71e2cd8] ...
	I0702 21:32:25.487243    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ee4e71e2cd8"
	I0702 21:32:25.502064    8323 logs.go:123] Gathering logs for kube-scheduler [a8f9a2711a23] ...
	I0702 21:32:25.502073    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8f9a2711a23"
	I0702 21:32:25.514938    8323 logs.go:123] Gathering logs for kube-scheduler [36785c6f1cd4] ...
	I0702 21:32:25.514947    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36785c6f1cd4"
	I0702 21:32:25.530419    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:32:25.530430    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:32:25.604432    8323 logs.go:123] Gathering logs for coredns [b9a52daacedd] ...
	I0702 21:32:25.604445    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a52daacedd"
	I0702 21:32:25.615966    8323 logs.go:123] Gathering logs for kube-controller-manager [6c69b3f6dd41] ...
	I0702 21:32:25.615981    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c69b3f6dd41"
	I0702 21:32:25.627265    8323 logs.go:123] Gathering logs for storage-provisioner [d6a8b9012496] ...
	I0702 21:32:25.627278    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a8b9012496"
	I0702 21:32:25.638266    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:32:25.638276    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:32:25.650236    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:32:25.650249    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:32:25.687543    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:32:25.687637    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:32:25.688610    8323 logs.go:123] Gathering logs for kube-apiserver [54ec470b077b] ...
	I0702 21:32:25.688616    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ec470b077b"
	I0702 21:32:25.710021    8323 logs.go:123] Gathering logs for kube-proxy [1befadafa2eb] ...
	I0702 21:32:25.710031    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1befadafa2eb"
	I0702 21:32:25.721799    8323 logs.go:123] Gathering logs for kube-controller-manager [9ddff714a977] ...
	I0702 21:32:25.721812    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ddff714a977"
	I0702 21:32:25.743152    8323 logs.go:123] Gathering logs for kube-apiserver [b4d169a6fe7c] ...
	I0702 21:32:25.743163    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4d169a6fe7c"
	I0702 21:32:25.759202    8323 logs.go:123] Gathering logs for etcd [e47ea0c109b8] ...
	I0702 21:32:25.759213    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e47ea0c109b8"
	I0702 21:32:25.774381    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:32:25.774393    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:32:25.799546    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:32:25.799554    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:32:25.799577    8323 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0702 21:32:25.799581    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:32:25.799584    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:32:25.799599    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:32:25.799602    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
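The diagnostic sweep above repeats for the rest of the log with only timestamps changing: for each control-plane component, docker ps -a with a k8s_<name> filter lists its containers (two IDs where both the pre- and post-upgrade containers exist), docker logs --tail 400 captures each one, and journalctl covers the kubelet and Docker units. The only problem the kubelet scan flags is the coredns RBAC error: the Node authorizer finds no relationship between node running-upgrade-908000 and the coredns ConfigMap, so the kubelet may not list it; that alone does not explain the unreachable apiserver. A sketch of the sweep, with the component list mirroring the filters above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            out, _ := exec.Command("docker", "ps", "-a",
                "--filter=name=k8s_"+c, "--format={{.ID}}").Output()
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                _ = logs // minikube scans these for known problem patterns
            }
        }
    }
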
	I0702 21:32:35.801817    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:32:40.804529    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:32:40.804981    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:32:40.846138    8323 logs.go:276] 2 containers: [b4d169a6fe7c 54ec470b077b]
	I0702 21:32:40.846276    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:32:40.868502    8323 logs.go:276] 2 containers: [e47ea0c109b8 1ee4e71e2cd8]
	I0702 21:32:40.868622    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:32:40.884092    8323 logs.go:276] 1 containers: [b9a52daacedd]
	I0702 21:32:40.884170    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:32:40.896552    8323 logs.go:276] 2 containers: [a8f9a2711a23 36785c6f1cd4]
	I0702 21:32:40.896624    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:32:40.907563    8323 logs.go:276] 1 containers: [1befadafa2eb]
	I0702 21:32:40.907624    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:32:40.918234    8323 logs.go:276] 2 containers: [9ddff714a977 6c69b3f6dd41]
	I0702 21:32:40.918305    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:32:40.928701    8323 logs.go:276] 0 containers: []
	W0702 21:32:40.928714    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:32:40.928764    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:32:40.938821    8323 logs.go:276] 1 containers: [d6a8b9012496]
	I0702 21:32:40.938837    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:32:40.938843    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:32:40.976207    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:32:40.976298    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:32:40.977249    8323 logs.go:123] Gathering logs for kube-apiserver [b4d169a6fe7c] ...
	I0702 21:32:40.977253    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4d169a6fe7c"
	I0702 21:32:40.991431    8323 logs.go:123] Gathering logs for kube-scheduler [a8f9a2711a23] ...
	I0702 21:32:40.991441    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8f9a2711a23"
	I0702 21:32:41.002608    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:32:41.002621    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:32:41.027007    8323 logs.go:123] Gathering logs for etcd [e47ea0c109b8] ...
	I0702 21:32:41.027015    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e47ea0c109b8"
	I0702 21:32:41.040885    8323 logs.go:123] Gathering logs for etcd [1ee4e71e2cd8] ...
	I0702 21:32:41.040896    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ee4e71e2cd8"
	I0702 21:32:41.055254    8323 logs.go:123] Gathering logs for kube-scheduler [36785c6f1cd4] ...
	I0702 21:32:41.055263    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36785c6f1cd4"
	I0702 21:32:41.070157    8323 logs.go:123] Gathering logs for storage-provisioner [d6a8b9012496] ...
	I0702 21:32:41.070169    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a8b9012496"
	I0702 21:32:41.081194    8323 logs.go:123] Gathering logs for kube-apiserver [54ec470b077b] ...
	I0702 21:32:41.081205    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ec470b077b"
	I0702 21:32:41.101168    8323 logs.go:123] Gathering logs for coredns [b9a52daacedd] ...
	I0702 21:32:41.101181    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a52daacedd"
	I0702 21:32:41.114975    8323 logs.go:123] Gathering logs for kube-proxy [1befadafa2eb] ...
	I0702 21:32:41.114985    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1befadafa2eb"
	I0702 21:32:41.126258    8323 logs.go:123] Gathering logs for kube-controller-manager [9ddff714a977] ...
	I0702 21:32:41.126271    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ddff714a977"
	I0702 21:32:41.143708    8323 logs.go:123] Gathering logs for kube-controller-manager [6c69b3f6dd41] ...
	I0702 21:32:41.143719    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c69b3f6dd41"
	I0702 21:32:41.155134    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:32:41.155148    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:32:41.166682    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:32:41.166693    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:32:41.170889    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:32:41.170895    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:32:41.206612    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:32:41.206623    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:32:41.206652    8323 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0702 21:32:41.206656    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:32:41.206662    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:32:41.206680    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:32:41.206685    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:32:51.210049    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:32:56.212700    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:32:56.213166    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:32:56.256312    8323 logs.go:276] 2 containers: [b4d169a6fe7c 54ec470b077b]
	I0702 21:32:56.256426    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:32:56.279662    8323 logs.go:276] 2 containers: [e47ea0c109b8 1ee4e71e2cd8]
	I0702 21:32:56.279765    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:32:56.293335    8323 logs.go:276] 1 containers: [b9a52daacedd]
	I0702 21:32:56.293412    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:32:56.305400    8323 logs.go:276] 2 containers: [a8f9a2711a23 36785c6f1cd4]
	I0702 21:32:56.305470    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:32:56.316171    8323 logs.go:276] 1 containers: [1befadafa2eb]
	I0702 21:32:56.316241    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:32:56.327179    8323 logs.go:276] 2 containers: [9ddff714a977 6c69b3f6dd41]
	I0702 21:32:56.327246    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:32:56.346274    8323 logs.go:276] 0 containers: []
	W0702 21:32:56.346286    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:32:56.346345    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:32:56.357108    8323 logs.go:276] 1 containers: [d6a8b9012496]
	I0702 21:32:56.357133    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:32:56.357138    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:32:56.395421    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:32:56.395515    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:32:56.396519    8323 logs.go:123] Gathering logs for kube-apiserver [b4d169a6fe7c] ...
	I0702 21:32:56.396526    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4d169a6fe7c"
	I0702 21:32:56.410615    8323 logs.go:123] Gathering logs for kube-controller-manager [6c69b3f6dd41] ...
	I0702 21:32:56.410629    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c69b3f6dd41"
	I0702 21:32:56.422907    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:32:56.422920    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:32:56.434876    8323 logs.go:123] Gathering logs for kube-scheduler [a8f9a2711a23] ...
	I0702 21:32:56.434889    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8f9a2711a23"
	I0702 21:32:56.449722    8323 logs.go:123] Gathering logs for kube-scheduler [36785c6f1cd4] ...
	I0702 21:32:56.449735    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36785c6f1cd4"
	I0702 21:32:56.468785    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:32:56.468798    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:32:56.473544    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:32:56.473552    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:32:56.508108    8323 logs.go:123] Gathering logs for coredns [b9a52daacedd] ...
	I0702 21:32:56.508121    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a52daacedd"
	I0702 21:32:56.519780    8323 logs.go:123] Gathering logs for kube-controller-manager [9ddff714a977] ...
	I0702 21:32:56.519797    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ddff714a977"
	I0702 21:32:56.537244    8323 logs.go:123] Gathering logs for storage-provisioner [d6a8b9012496] ...
	I0702 21:32:56.537255    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a8b9012496"
	I0702 21:32:56.549403    8323 logs.go:123] Gathering logs for kube-apiserver [54ec470b077b] ...
	I0702 21:32:56.549417    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ec470b077b"
	I0702 21:32:56.569990    8323 logs.go:123] Gathering logs for etcd [e47ea0c109b8] ...
	I0702 21:32:56.570001    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e47ea0c109b8"
	I0702 21:32:56.583985    8323 logs.go:123] Gathering logs for etcd [1ee4e71e2cd8] ...
	I0702 21:32:56.583994    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ee4e71e2cd8"
	I0702 21:32:56.598253    8323 logs.go:123] Gathering logs for kube-proxy [1befadafa2eb] ...
	I0702 21:32:56.598263    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1befadafa2eb"
	I0702 21:32:56.616937    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:32:56.616947    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:32:56.642932    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:32:56.642939    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:32:56.642963    8323 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0702 21:32:56.642967    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:32:56.642983    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:32:56.642989    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:32:56.642993    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:33:06.647052    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:33:11.649669    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:33:11.649934    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:33:11.676638    8323 logs.go:276] 2 containers: [b4d169a6fe7c 54ec470b077b]
	I0702 21:33:11.676750    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:33:11.693306    8323 logs.go:276] 2 containers: [e47ea0c109b8 1ee4e71e2cd8]
	I0702 21:33:11.693397    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:33:11.706284    8323 logs.go:276] 1 containers: [b9a52daacedd]
	I0702 21:33:11.706368    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:33:11.718482    8323 logs.go:276] 2 containers: [a8f9a2711a23 36785c6f1cd4]
	I0702 21:33:11.718552    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:33:11.728843    8323 logs.go:276] 1 containers: [1befadafa2eb]
	I0702 21:33:11.728904    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:33:11.738861    8323 logs.go:276] 2 containers: [9ddff714a977 6c69b3f6dd41]
	I0702 21:33:11.738937    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:33:11.748814    8323 logs.go:276] 0 containers: []
	W0702 21:33:11.748826    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:33:11.748878    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:33:11.759102    8323 logs.go:276] 1 containers: [d6a8b9012496]
	I0702 21:33:11.759121    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:33:11.759125    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:33:11.798633    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:33:11.798727    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:33:11.799701    8323 logs.go:123] Gathering logs for kube-apiserver [b4d169a6fe7c] ...
	I0702 21:33:11.799706    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4d169a6fe7c"
	I0702 21:33:11.813484    8323 logs.go:123] Gathering logs for etcd [1ee4e71e2cd8] ...
	I0702 21:33:11.813498    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ee4e71e2cd8"
	I0702 21:33:11.827879    8323 logs.go:123] Gathering logs for kube-controller-manager [9ddff714a977] ...
	I0702 21:33:11.827891    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ddff714a977"
	I0702 21:33:11.845331    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:33:11.845354    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:33:11.850116    8323 logs.go:123] Gathering logs for kube-apiserver [54ec470b077b] ...
	I0702 21:33:11.850122    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ec470b077b"
	I0702 21:33:11.869706    8323 logs.go:123] Gathering logs for kube-scheduler [a8f9a2711a23] ...
	I0702 21:33:11.869719    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8f9a2711a23"
	I0702 21:33:11.881206    8323 logs.go:123] Gathering logs for kube-proxy [1befadafa2eb] ...
	I0702 21:33:11.881219    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1befadafa2eb"
	I0702 21:33:11.896298    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:33:11.896308    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:33:11.920313    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:33:11.920320    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:33:11.956089    8323 logs.go:123] Gathering logs for coredns [b9a52daacedd] ...
	I0702 21:33:11.956100    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a52daacedd"
	I0702 21:33:11.967708    8323 logs.go:123] Gathering logs for kube-controller-manager [6c69b3f6dd41] ...
	I0702 21:33:11.967725    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c69b3f6dd41"
	I0702 21:33:11.979684    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:33:11.979696    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:33:11.994373    8323 logs.go:123] Gathering logs for etcd [e47ea0c109b8] ...
	I0702 21:33:11.994383    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e47ea0c109b8"
	I0702 21:33:12.008436    8323 logs.go:123] Gathering logs for kube-scheduler [36785c6f1cd4] ...
	I0702 21:33:12.008449    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36785c6f1cd4"
	I0702 21:33:12.028711    8323 logs.go:123] Gathering logs for storage-provisioner [d6a8b9012496] ...
	I0702 21:33:12.028723    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a8b9012496"
	I0702 21:33:12.039846    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:33:12.039858    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:33:12.039886    8323 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0702 21:33:12.039891    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:33:12.039895    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:33:12.039899    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:33:12.039914    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:33:22.043968    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:33:27.046796    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:33:27.047224    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:33:27.087093    8323 logs.go:276] 2 containers: [b4d169a6fe7c 54ec470b077b]
	I0702 21:33:27.087216    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:33:27.108818    8323 logs.go:276] 2 containers: [e47ea0c109b8 1ee4e71e2cd8]
	I0702 21:33:27.108930    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:33:27.123972    8323 logs.go:276] 1 containers: [b9a52daacedd]
	I0702 21:33:27.124045    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:33:27.136720    8323 logs.go:276] 2 containers: [a8f9a2711a23 36785c6f1cd4]
	I0702 21:33:27.136816    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:33:27.147606    8323 logs.go:276] 1 containers: [1befadafa2eb]
	I0702 21:33:27.147667    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:33:27.158618    8323 logs.go:276] 2 containers: [9ddff714a977 6c69b3f6dd41]
	I0702 21:33:27.158690    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:33:27.172890    8323 logs.go:276] 0 containers: []
	W0702 21:33:27.172902    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:33:27.172960    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:33:27.183983    8323 logs.go:276] 1 containers: [d6a8b9012496]
	I0702 21:33:27.184002    8323 logs.go:123] Gathering logs for etcd [e47ea0c109b8] ...
	I0702 21:33:27.184007    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e47ea0c109b8"
	I0702 21:33:27.198232    8323 logs.go:123] Gathering logs for etcd [1ee4e71e2cd8] ...
	I0702 21:33:27.198242    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ee4e71e2cd8"
	I0702 21:33:27.213238    8323 logs.go:123] Gathering logs for kube-controller-manager [9ddff714a977] ...
	I0702 21:33:27.213248    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ddff714a977"
	I0702 21:33:27.233433    8323 logs.go:123] Gathering logs for coredns [b9a52daacedd] ...
	I0702 21:33:27.233446    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a52daacedd"
	I0702 21:33:27.245359    8323 logs.go:123] Gathering logs for kube-controller-manager [6c69b3f6dd41] ...
	I0702 21:33:27.245369    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c69b3f6dd41"
	I0702 21:33:27.257027    8323 logs.go:123] Gathering logs for kube-apiserver [b4d169a6fe7c] ...
	I0702 21:33:27.257039    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4d169a6fe7c"
	I0702 21:33:27.272749    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:33:27.272759    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:33:27.298439    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:33:27.298447    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:33:27.309913    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:33:27.309924    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:33:27.349466    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:33:27.349559    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:33:27.350557    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:33:27.350565    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:33:27.355379    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:33:27.355387    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:33:27.396755    8323 logs.go:123] Gathering logs for kube-proxy [1befadafa2eb] ...
	I0702 21:33:27.396768    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1befadafa2eb"
	I0702 21:33:27.408992    8323 logs.go:123] Gathering logs for storage-provisioner [d6a8b9012496] ...
	I0702 21:33:27.409005    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a8b9012496"
	I0702 21:33:27.420306    8323 logs.go:123] Gathering logs for kube-apiserver [54ec470b077b] ...
	I0702 21:33:27.420319    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ec470b077b"
	I0702 21:33:27.448375    8323 logs.go:123] Gathering logs for kube-scheduler [a8f9a2711a23] ...
	I0702 21:33:27.448386    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8f9a2711a23"
	I0702 21:33:27.460091    8323 logs.go:123] Gathering logs for kube-scheduler [36785c6f1cd4] ...
	I0702 21:33:27.460102    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36785c6f1cd4"
	I0702 21:33:27.475476    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:33:27.475489    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:33:27.475515    8323 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0702 21:33:27.475520    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:33:27.475523    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:33:27.475528    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:33:27.475531    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:33:37.479564    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:33:42.482150    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:33:42.482347    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:33:42.504286    8323 logs.go:276] 2 containers: [b4d169a6fe7c 54ec470b077b]
	I0702 21:33:42.504384    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:33:42.519746    8323 logs.go:276] 2 containers: [e47ea0c109b8 1ee4e71e2cd8]
	I0702 21:33:42.519823    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:33:42.532203    8323 logs.go:276] 1 containers: [b9a52daacedd]
	I0702 21:33:42.532270    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:33:42.544015    8323 logs.go:276] 2 containers: [a8f9a2711a23 36785c6f1cd4]
	I0702 21:33:42.544094    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:33:42.554343    8323 logs.go:276] 1 containers: [1befadafa2eb]
	I0702 21:33:42.554412    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:33:42.565166    8323 logs.go:276] 2 containers: [9ddff714a977 6c69b3f6dd41]
	I0702 21:33:42.565232    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:33:42.575245    8323 logs.go:276] 0 containers: []
	W0702 21:33:42.575256    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:33:42.575312    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:33:42.585602    8323 logs.go:276] 1 containers: [d6a8b9012496]
	I0702 21:33:42.585619    8323 logs.go:123] Gathering logs for kube-proxy [1befadafa2eb] ...
	I0702 21:33:42.585625    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1befadafa2eb"
	I0702 21:33:42.596891    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:33:42.596903    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:33:42.636058    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:33:42.636149    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:33:42.637096    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:33:42.637100    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:33:42.670541    8323 logs.go:123] Gathering logs for kube-apiserver [b4d169a6fe7c] ...
	I0702 21:33:42.670552    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4d169a6fe7c"
	I0702 21:33:42.684813    8323 logs.go:123] Gathering logs for kube-apiserver [54ec470b077b] ...
	I0702 21:33:42.684825    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ec470b077b"
	I0702 21:33:42.708576    8323 logs.go:123] Gathering logs for etcd [e47ea0c109b8] ...
	I0702 21:33:42.708588    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e47ea0c109b8"
	I0702 21:33:42.725323    8323 logs.go:123] Gathering logs for etcd [1ee4e71e2cd8] ...
	I0702 21:33:42.725335    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ee4e71e2cd8"
	I0702 21:33:42.739963    8323 logs.go:123] Gathering logs for coredns [b9a52daacedd] ...
	I0702 21:33:42.739972    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a52daacedd"
	I0702 21:33:42.751535    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:33:42.751546    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:33:42.763176    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:33:42.763190    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:33:42.767870    8323 logs.go:123] Gathering logs for kube-controller-manager [6c69b3f6dd41] ...
	I0702 21:33:42.767877    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c69b3f6dd41"
	I0702 21:33:42.779468    8323 logs.go:123] Gathering logs for storage-provisioner [d6a8b9012496] ...
	I0702 21:33:42.779481    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a8b9012496"
	I0702 21:33:42.790934    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:33:42.790944    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:33:42.815078    8323 logs.go:123] Gathering logs for kube-scheduler [a8f9a2711a23] ...
	I0702 21:33:42.815084    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8f9a2711a23"
	I0702 21:33:42.826483    8323 logs.go:123] Gathering logs for kube-scheduler [36785c6f1cd4] ...
	I0702 21:33:42.826494    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36785c6f1cd4"
	I0702 21:33:42.841146    8323 logs.go:123] Gathering logs for kube-controller-manager [9ddff714a977] ...
	I0702 21:33:42.841158    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ddff714a977"
	I0702 21:33:42.862723    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:33:42.862733    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:33:42.862761    8323 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0702 21:33:42.862765    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:33:42.862769    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:33:42.862785    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:33:42.862790    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:33:52.866416    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:33:57.869341    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:33:57.869759    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:33:57.911510    8323 logs.go:276] 2 containers: [b4d169a6fe7c 54ec470b077b]
	I0702 21:33:57.911636    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:33:57.933759    8323 logs.go:276] 2 containers: [e47ea0c109b8 1ee4e71e2cd8]
	I0702 21:33:57.933860    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:33:57.949203    8323 logs.go:276] 1 containers: [b9a52daacedd]
	I0702 21:33:57.949278    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:33:57.961679    8323 logs.go:276] 2 containers: [a8f9a2711a23 36785c6f1cd4]
	I0702 21:33:57.961750    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:33:57.973089    8323 logs.go:276] 1 containers: [1befadafa2eb]
	I0702 21:33:57.973153    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:33:57.983586    8323 logs.go:276] 2 containers: [9ddff714a977 6c69b3f6dd41]
	I0702 21:33:57.983643    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:33:57.994507    8323 logs.go:276] 0 containers: []
	W0702 21:33:57.994518    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:33:57.994574    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:33:58.004871    8323 logs.go:276] 1 containers: [d6a8b9012496]
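
The eight docker ps calls above take a per-component census of the control-plane containers; k8s_ is the name prefix cri-dockerd gives Kubernetes-managed containers. For anyone replaying this diagnosis by hand, the same inventory can be gathered in one loop:

	# One-shot version of the per-component container census
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  echo "$c: $(docker ps -a --filter=name=k8s_$c --format '{{.ID}}' | tr '\n' ' ')"
	done
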
	I0702 21:33:58.004893    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:33:58.004898    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:33:58.042001    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:33:58.042093    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:33:58.043041    8323 logs.go:123] Gathering logs for kube-apiserver [54ec470b077b] ...
	I0702 21:33:58.043045    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ec470b077b"
	I0702 21:33:58.062813    8323 logs.go:123] Gathering logs for etcd [1ee4e71e2cd8] ...
	I0702 21:33:58.062825    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ee4e71e2cd8"
	I0702 21:33:58.078453    8323 logs.go:123] Gathering logs for kube-scheduler [a8f9a2711a23] ...
	I0702 21:33:58.078467    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8f9a2711a23"
	I0702 21:33:58.090479    8323 logs.go:123] Gathering logs for kube-controller-manager [6c69b3f6dd41] ...
	I0702 21:33:58.090488    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c69b3f6dd41"
	I0702 21:33:58.105150    8323 logs.go:123] Gathering logs for storage-provisioner [d6a8b9012496] ...
	I0702 21:33:58.105161    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a8b9012496"
	I0702 21:33:58.116856    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:33:58.116865    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:33:58.121399    8323 logs.go:123] Gathering logs for etcd [e47ea0c109b8] ...
	I0702 21:33:58.121407    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e47ea0c109b8"
	I0702 21:33:58.135342    8323 logs.go:123] Gathering logs for coredns [b9a52daacedd] ...
	I0702 21:33:58.135351    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a52daacedd"
	I0702 21:33:58.146894    8323 logs.go:123] Gathering logs for kube-scheduler [36785c6f1cd4] ...
	I0702 21:33:58.146904    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36785c6f1cd4"
	I0702 21:33:58.161942    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:33:58.161951    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:33:58.174975    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:33:58.174985    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:33:58.210244    8323 logs.go:123] Gathering logs for kube-apiserver [b4d169a6fe7c] ...
	I0702 21:33:58.210255    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4d169a6fe7c"
	I0702 21:33:58.224391    8323 logs.go:123] Gathering logs for kube-proxy [1befadafa2eb] ...
	I0702 21:33:58.224402    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1befadafa2eb"
	I0702 21:33:58.239641    8323 logs.go:123] Gathering logs for kube-controller-manager [9ddff714a977] ...
	I0702 21:33:58.239653    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ddff714a977"
	I0702 21:33:58.257769    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:33:58.257780    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:33:58.283648    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:33:58.283656    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:33:58.283682    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:33:58.283687    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:33:58.283690    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:33:58.283693    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:33:58.283696    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:34:08.286770    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:34:13.289223    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:34:13.289435    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:34:13.302787    8323 logs.go:276] 2 containers: [b4d169a6fe7c 54ec470b077b]
	I0702 21:34:13.302869    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:34:13.313514    8323 logs.go:276] 2 containers: [e47ea0c109b8 1ee4e71e2cd8]
	I0702 21:34:13.313584    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:34:13.328837    8323 logs.go:276] 1 containers: [b9a52daacedd]
	I0702 21:34:13.328904    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:34:13.339344    8323 logs.go:276] 2 containers: [a8f9a2711a23 36785c6f1cd4]
	I0702 21:34:13.339419    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:34:13.349493    8323 logs.go:276] 1 containers: [1befadafa2eb]
	I0702 21:34:13.349562    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:34:13.360187    8323 logs.go:276] 2 containers: [9ddff714a977 6c69b3f6dd41]
	I0702 21:34:13.360258    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:34:13.371055    8323 logs.go:276] 0 containers: []
	W0702 21:34:13.371067    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:34:13.371123    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:34:13.381755    8323 logs.go:276] 1 containers: [d6a8b9012496]
	I0702 21:34:13.381772    8323 logs.go:123] Gathering logs for etcd [1ee4e71e2cd8] ...
	I0702 21:34:13.381778    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ee4e71e2cd8"
	I0702 21:34:13.396592    8323 logs.go:123] Gathering logs for kube-scheduler [36785c6f1cd4] ...
	I0702 21:34:13.396604    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36785c6f1cd4"
	I0702 21:34:13.411497    8323 logs.go:123] Gathering logs for kube-controller-manager [9ddff714a977] ...
	I0702 21:34:13.411510    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ddff714a977"
	I0702 21:34:13.429587    8323 logs.go:123] Gathering logs for kube-controller-manager [6c69b3f6dd41] ...
	I0702 21:34:13.429597    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c69b3f6dd41"
	I0702 21:34:13.441439    8323 logs.go:123] Gathering logs for storage-provisioner [d6a8b9012496] ...
	I0702 21:34:13.441452    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a8b9012496"
	I0702 21:34:13.452927    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:34:13.452940    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:34:13.487436    8323 logs.go:123] Gathering logs for kube-apiserver [b4d169a6fe7c] ...
	I0702 21:34:13.487449    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4d169a6fe7c"
	I0702 21:34:13.501862    8323 logs.go:123] Gathering logs for kube-apiserver [54ec470b077b] ...
	I0702 21:34:13.501872    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ec470b077b"
	I0702 21:34:13.521869    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:34:13.521882    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:34:13.545266    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:34:13.545273    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:34:13.549224    8323 logs.go:123] Gathering logs for etcd [e47ea0c109b8] ...
	I0702 21:34:13.549233    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e47ea0c109b8"
	I0702 21:34:13.563005    8323 logs.go:123] Gathering logs for coredns [b9a52daacedd] ...
	I0702 21:34:13.563017    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a52daacedd"
	I0702 21:34:13.574386    8323 logs.go:123] Gathering logs for kube-scheduler [a8f9a2711a23] ...
	I0702 21:34:13.574397    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8f9a2711a23"
	I0702 21:34:13.585751    8323 logs.go:123] Gathering logs for kube-proxy [1befadafa2eb] ...
	I0702 21:34:13.585763    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1befadafa2eb"
	I0702 21:34:13.597156    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:34:13.597167    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:34:13.608918    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:34:13.608928    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:34:13.646368    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:34:13.646461    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:34:13.647413    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:34:13.647418    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:34:13.647450    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:34:13.647454    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:34:13.647472    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:34:13.647481    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:34:13.647484    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:34:23.650112    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:34:28.652674    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:34:28.653205    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:34:28.693337    8323 logs.go:276] 2 containers: [b4d169a6fe7c 54ec470b077b]
	I0702 21:34:28.693484    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:34:28.717226    8323 logs.go:276] 2 containers: [e47ea0c109b8 1ee4e71e2cd8]
	I0702 21:34:28.717323    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:34:28.732119    8323 logs.go:276] 1 containers: [b9a52daacedd]
	I0702 21:34:28.732194    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:34:28.744462    8323 logs.go:276] 2 containers: [a8f9a2711a23 36785c6f1cd4]
	I0702 21:34:28.744565    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:34:28.755024    8323 logs.go:276] 1 containers: [1befadafa2eb]
	I0702 21:34:28.755085    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:34:28.767457    8323 logs.go:276] 2 containers: [9ddff714a977 6c69b3f6dd41]
	I0702 21:34:28.767517    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:34:28.782995    8323 logs.go:276] 0 containers: []
	W0702 21:34:28.783008    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:34:28.783072    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:34:28.796754    8323 logs.go:276] 1 containers: [d6a8b9012496]
	I0702 21:34:28.796772    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:34:28.796778    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:34:28.820096    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:34:28.820106    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:34:28.859042    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:34:28.859134    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:34:28.860123    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:34:28.860128    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:34:28.894941    8323 logs.go:123] Gathering logs for kube-scheduler [a8f9a2711a23] ...
	I0702 21:34:28.894951    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8f9a2711a23"
	I0702 21:34:28.907566    8323 logs.go:123] Gathering logs for storage-provisioner [d6a8b9012496] ...
	I0702 21:34:28.907576    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a8b9012496"
	I0702 21:34:28.919206    8323 logs.go:123] Gathering logs for coredns [b9a52daacedd] ...
	I0702 21:34:28.919221    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a52daacedd"
	I0702 21:34:28.930298    8323 logs.go:123] Gathering logs for kube-scheduler [36785c6f1cd4] ...
	I0702 21:34:28.930307    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36785c6f1cd4"
	I0702 21:34:28.949247    8323 logs.go:123] Gathering logs for kube-controller-manager [9ddff714a977] ...
	I0702 21:34:28.949261    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ddff714a977"
	I0702 21:34:28.969490    8323 logs.go:123] Gathering logs for kube-apiserver [b4d169a6fe7c] ...
	I0702 21:34:28.969500    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4d169a6fe7c"
	I0702 21:34:28.983817    8323 logs.go:123] Gathering logs for etcd [1ee4e71e2cd8] ...
	I0702 21:34:28.983831    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ee4e71e2cd8"
	I0702 21:34:28.998969    8323 logs.go:123] Gathering logs for kube-controller-manager [6c69b3f6dd41] ...
	I0702 21:34:28.998979    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c69b3f6dd41"
	I0702 21:34:29.010674    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:34:29.010683    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:34:29.022566    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:34:29.022577    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:34:29.027309    8323 logs.go:123] Gathering logs for kube-apiserver [54ec470b077b] ...
	I0702 21:34:29.027315    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ec470b077b"
	I0702 21:34:29.046987    8323 logs.go:123] Gathering logs for etcd [e47ea0c109b8] ...
	I0702 21:34:29.046995    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e47ea0c109b8"
	I0702 21:34:29.061560    8323 logs.go:123] Gathering logs for kube-proxy [1befadafa2eb] ...
	I0702 21:34:29.061574    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1befadafa2eb"
	I0702 21:34:29.073647    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:34:29.073656    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:34:29.073712    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:34:29.073716    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:34:29.073720    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:34:29.073727    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:34:29.073730    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:34:39.077698    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:34:44.080247    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:34:44.080598    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:34:44.110779    8323 logs.go:276] 2 containers: [b4d169a6fe7c 54ec470b077b]
	I0702 21:34:44.110902    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:34:44.128220    8323 logs.go:276] 2 containers: [e47ea0c109b8 1ee4e71e2cd8]
	I0702 21:34:44.128311    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:34:44.141365    8323 logs.go:276] 1 containers: [b9a52daacedd]
	I0702 21:34:44.141441    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:34:44.157888    8323 logs.go:276] 2 containers: [a8f9a2711a23 36785c6f1cd4]
	I0702 21:34:44.157967    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:34:44.168011    8323 logs.go:276] 1 containers: [1befadafa2eb]
	I0702 21:34:44.168077    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:34:44.186241    8323 logs.go:276] 2 containers: [9ddff714a977 6c69b3f6dd41]
	I0702 21:34:44.186311    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:34:44.196121    8323 logs.go:276] 0 containers: []
	W0702 21:34:44.196130    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:34:44.196185    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:34:44.207017    8323 logs.go:276] 1 containers: [d6a8b9012496]
	I0702 21:34:44.207036    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:34:44.207043    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:34:44.211718    8323 logs.go:123] Gathering logs for kube-apiserver [b4d169a6fe7c] ...
	I0702 21:34:44.211727    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4d169a6fe7c"
	I0702 21:34:44.225743    8323 logs.go:123] Gathering logs for kube-apiserver [54ec470b077b] ...
	I0702 21:34:44.225755    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ec470b077b"
	I0702 21:34:44.246039    8323 logs.go:123] Gathering logs for etcd [e47ea0c109b8] ...
	I0702 21:34:44.246050    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e47ea0c109b8"
	I0702 21:34:44.259922    8323 logs.go:123] Gathering logs for etcd [1ee4e71e2cd8] ...
	I0702 21:34:44.259935    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ee4e71e2cd8"
	I0702 21:34:44.274444    8323 logs.go:123] Gathering logs for kube-scheduler [36785c6f1cd4] ...
	I0702 21:34:44.274454    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36785c6f1cd4"
	I0702 21:34:44.289475    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:34:44.289485    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:34:44.328872    8323 logs.go:123] Gathering logs for coredns [b9a52daacedd] ...
	I0702 21:34:44.328881    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a52daacedd"
	I0702 21:34:44.340608    8323 logs.go:123] Gathering logs for kube-controller-manager [6c69b3f6dd41] ...
	I0702 21:34:44.340619    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c69b3f6dd41"
	I0702 21:34:44.354362    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:34:44.354373    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:34:44.377721    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:34:44.377729    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:34:44.391143    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:34:44.391157    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:34:44.429478    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:34:44.429570    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:34:44.430566    8323 logs.go:123] Gathering logs for kube-scheduler [a8f9a2711a23] ...
	I0702 21:34:44.430571    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8f9a2711a23"
	I0702 21:34:44.446658    8323 logs.go:123] Gathering logs for kube-proxy [1befadafa2eb] ...
	I0702 21:34:44.446672    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1befadafa2eb"
	I0702 21:34:44.458146    8323 logs.go:123] Gathering logs for kube-controller-manager [9ddff714a977] ...
	I0702 21:34:44.458159    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ddff714a977"
	I0702 21:34:44.475424    8323 logs.go:123] Gathering logs for storage-provisioner [d6a8b9012496] ...
	I0702 21:34:44.475435    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a8b9012496"
	I0702 21:34:44.487122    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:34:44.487135    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:34:44.487161    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:34:44.487167    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:34:44.487171    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:34:44.487175    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:34:44.487179    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:34:54.491235    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:34:59.493700    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:34:59.494300    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:34:59.533767    8323 logs.go:276] 2 containers: [b4d169a6fe7c 54ec470b077b]
	I0702 21:34:59.533901    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:34:59.556210    8323 logs.go:276] 2 containers: [e47ea0c109b8 1ee4e71e2cd8]
	I0702 21:34:59.556324    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:34:59.571854    8323 logs.go:276] 1 containers: [b9a52daacedd]
	I0702 21:34:59.571928    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:34:59.584776    8323 logs.go:276] 2 containers: [a8f9a2711a23 36785c6f1cd4]
	I0702 21:34:59.584851    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:34:59.596098    8323 logs.go:276] 1 containers: [1befadafa2eb]
	I0702 21:34:59.596162    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:34:59.607206    8323 logs.go:276] 2 containers: [9ddff714a977 6c69b3f6dd41]
	I0702 21:34:59.607275    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:34:59.617671    8323 logs.go:276] 0 containers: []
	W0702 21:34:59.617682    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:34:59.617736    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:34:59.629469    8323 logs.go:276] 1 containers: [d6a8b9012496]
	I0702 21:34:59.629485    8323 logs.go:123] Gathering logs for kube-apiserver [54ec470b077b] ...
	I0702 21:34:59.629490    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ec470b077b"
	I0702 21:34:59.650093    8323 logs.go:123] Gathering logs for etcd [e47ea0c109b8] ...
	I0702 21:34:59.650102    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e47ea0c109b8"
	I0702 21:34:59.664720    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:34:59.664733    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:34:59.676432    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:34:59.676445    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:34:59.713810    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:34:59.713905    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:34:59.714852    8323 logs.go:123] Gathering logs for kube-proxy [1befadafa2eb] ...
	I0702 21:34:59.714858    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1befadafa2eb"
	I0702 21:34:59.726890    8323 logs.go:123] Gathering logs for kube-scheduler [36785c6f1cd4] ...
	I0702 21:34:59.726902    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36785c6f1cd4"
	I0702 21:34:59.748076    8323 logs.go:123] Gathering logs for coredns [b9a52daacedd] ...
	I0702 21:34:59.748086    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a52daacedd"
	I0702 21:34:59.760067    8323 logs.go:123] Gathering logs for kube-scheduler [a8f9a2711a23] ...
	I0702 21:34:59.760081    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8f9a2711a23"
	I0702 21:34:59.774222    8323 logs.go:123] Gathering logs for storage-provisioner [d6a8b9012496] ...
	I0702 21:34:59.774231    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a8b9012496"
	I0702 21:34:59.785314    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:34:59.785327    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:34:59.810066    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:34:59.810075    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:34:59.845822    8323 logs.go:123] Gathering logs for kube-apiserver [b4d169a6fe7c] ...
	I0702 21:34:59.845836    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4d169a6fe7c"
	I0702 21:34:59.860328    8323 logs.go:123] Gathering logs for etcd [1ee4e71e2cd8] ...
	I0702 21:34:59.860337    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ee4e71e2cd8"
	I0702 21:34:59.875454    8323 logs.go:123] Gathering logs for kube-controller-manager [9ddff714a977] ...
	I0702 21:34:59.875467    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ddff714a977"
	I0702 21:34:59.893632    8323 logs.go:123] Gathering logs for kube-controller-manager [6c69b3f6dd41] ...
	I0702 21:34:59.893650    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c69b3f6dd41"
	I0702 21:34:59.910697    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:34:59.910709    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:34:59.921921    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:34:59.921932    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:34:59.921955    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:34:59.921959    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:34:59.921963    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:34:59.921968    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:34:59.921971    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:35:09.925104    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:35:14.928108    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:35:14.928552    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:35:14.987091    8323 logs.go:276] 2 containers: [b4d169a6fe7c 54ec470b077b]
	I0702 21:35:14.987192    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:35:15.015303    8323 logs.go:276] 2 containers: [e47ea0c109b8 1ee4e71e2cd8]
	I0702 21:35:15.015385    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:35:15.026124    8323 logs.go:276] 1 containers: [b9a52daacedd]
	I0702 21:35:15.026189    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:35:15.036987    8323 logs.go:276] 2 containers: [a8f9a2711a23 36785c6f1cd4]
	I0702 21:35:15.037059    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:35:15.047621    8323 logs.go:276] 1 containers: [1befadafa2eb]
	I0702 21:35:15.047689    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:35:15.057836    8323 logs.go:276] 2 containers: [9ddff714a977 6c69b3f6dd41]
	I0702 21:35:15.057898    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:35:15.067777    8323 logs.go:276] 0 containers: []
	W0702 21:35:15.067792    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:35:15.067852    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:35:15.077985    8323 logs.go:276] 1 containers: [d6a8b9012496]
	I0702 21:35:15.078001    8323 logs.go:123] Gathering logs for kube-apiserver [b4d169a6fe7c] ...
	I0702 21:35:15.078007    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4d169a6fe7c"
	I0702 21:35:15.092060    8323 logs.go:123] Gathering logs for kube-scheduler [a8f9a2711a23] ...
	I0702 21:35:15.092074    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8f9a2711a23"
	I0702 21:35:15.103946    8323 logs.go:123] Gathering logs for kube-proxy [1befadafa2eb] ...
	I0702 21:35:15.103957    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1befadafa2eb"
	I0702 21:35:15.115581    8323 logs.go:123] Gathering logs for kube-controller-manager [6c69b3f6dd41] ...
	I0702 21:35:15.115590    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c69b3f6dd41"
	I0702 21:35:15.127587    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:35:15.127597    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:35:15.163036    8323 logs.go:123] Gathering logs for etcd [1ee4e71e2cd8] ...
	I0702 21:35:15.163047    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ee4e71e2cd8"
	I0702 21:35:15.177894    8323 logs.go:123] Gathering logs for coredns [b9a52daacedd] ...
	I0702 21:35:15.177905    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a52daacedd"
	I0702 21:35:15.190533    8323 logs.go:123] Gathering logs for kube-controller-manager [9ddff714a977] ...
	I0702 21:35:15.190544    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ddff714a977"
	I0702 21:35:15.207870    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:35:15.207880    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:35:15.231262    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:35:15.231269    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:35:15.243800    8323 logs.go:123] Gathering logs for kube-apiserver [54ec470b077b] ...
	I0702 21:35:15.243810    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ec470b077b"
	I0702 21:35:15.264443    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:35:15.264454    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:35:15.268684    8323 logs.go:123] Gathering logs for etcd [e47ea0c109b8] ...
	I0702 21:35:15.268693    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e47ea0c109b8"
	I0702 21:35:15.282412    8323 logs.go:123] Gathering logs for kube-scheduler [36785c6f1cd4] ...
	I0702 21:35:15.282423    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36785c6f1cd4"
	I0702 21:35:15.297846    8323 logs.go:123] Gathering logs for storage-provisioner [d6a8b9012496] ...
	I0702 21:35:15.297856    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a8b9012496"
	I0702 21:35:15.309681    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:35:15.309691    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:35:15.349277    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:35:15.349367    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:35:15.350350    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:35:15.350359    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:35:15.350383    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:35:15.350386    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:35:15.350390    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:35:15.350393    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:35:15.350396    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:35:25.353184    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:35:30.355147    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:35:30.355185    8323 kubeadm.go:591] duration metric: took 4m7.326010833s to restartPrimaryControlPlane
	W0702 21:35:30.355295    8323 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0702 21:35:30.355314    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0702 21:35:31.343423    8323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0702 21:35:31.348357    8323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0702 21:35:31.351274    8323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0702 21:35:31.354482    8323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0702 21:35:31.354489    8323 kubeadm.go:156] found existing configuration files:
	
	I0702 21:35:31.354507    8323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/admin.conf
	I0702 21:35:31.356992    8323 kubeadm.go:162] "https://control-plane.minikube.internal:51204" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0702 21:35:31.357015    8323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0702 21:35:31.359983    8323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/kubelet.conf
	I0702 21:35:31.363383    8323 kubeadm.go:162] "https://control-plane.minikube.internal:51204" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0702 21:35:31.363403    8323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0702 21:35:31.366502    8323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/controller-manager.conf
	I0702 21:35:31.369191    8323 kubeadm.go:162] "https://control-plane.minikube.internal:51204" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0702 21:35:31.369210    8323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0702 21:35:31.371931    8323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/scheduler.conf
	I0702 21:35:31.375185    8323 kubeadm.go:162] "https://control-plane.minikube.internal:51204" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51204 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0702 21:35:31.375202    8323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
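
The four grep/rm pairs above are minikube's stale-kubeconfig sweep: any /etc/kubernetes/*.conf that does not mention the expected control-plane endpoint is removed so the kubeadm init below can rewrite it. Here every file is already missing, so each grep exits with status 2 and the rm is a no-op. A compact sketch of the same logic, using the endpoint this run expects:

	# Remove kubeconfigs that do not reference the expected endpoint
	ENDPOINT="https://control-plane.minikube.internal:51204"
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done
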
	I0702 21:35:31.378615    8323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0702 21:35:31.395519    8323 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0702 21:35:31.395548    8323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0702 21:35:31.442867    8323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0702 21:35:31.442939    8323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0702 21:35:31.442986    8323 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0702 21:35:31.496878    8323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0702 21:35:31.506080    8323 out.go:204]   - Generating certificates and keys ...
	I0702 21:35:31.506113    8323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0702 21:35:31.506147    8323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0702 21:35:31.506184    8323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0702 21:35:31.506214    8323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0702 21:35:31.506245    8323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0702 21:35:31.506285    8323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0702 21:35:31.506314    8323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0702 21:35:31.506346    8323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0702 21:35:31.506382    8323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0702 21:35:31.506419    8323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0702 21:35:31.506439    8323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0702 21:35:31.506478    8323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0702 21:35:31.551413    8323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0702 21:35:31.667881    8323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0702 21:35:31.698840    8323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0702 21:35:31.971203    8323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0702 21:35:31.999555    8323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0702 21:35:31.999902    8323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0702 21:35:31.999950    8323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0702 21:35:32.086899    8323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0702 21:35:32.091060    8323 out.go:204]   - Booting up control plane ...
	I0702 21:35:32.091115    8323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0702 21:35:32.091151    8323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0702 21:35:32.091241    8323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0702 21:35:32.091281    8323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0702 21:35:32.091359    8323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0702 21:35:36.595682    8323 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.508035 seconds
	I0702 21:35:36.595951    8323 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0702 21:35:36.615338    8323 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0702 21:35:37.130749    8323 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0702 21:35:37.130865    8323 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-908000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0702 21:35:37.636135    8323 kubeadm.go:309] [bootstrap-token] Using token: pkjmrv.cek479antku93z2w
	I0702 21:35:37.654299    8323 out.go:204]   - Configuring RBAC rules ...
	I0702 21:35:37.654367    8323 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0702 21:35:37.654432    8323 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0702 21:35:37.661242    8323 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0702 21:35:37.662348    8323 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0702 21:35:37.664114    8323 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0702 21:35:37.665254    8323 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0702 21:35:37.670033    8323 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0702 21:35:37.828946    8323 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0702 21:35:38.040534    8323 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0702 21:35:38.040988    8323 kubeadm.go:309] 
	I0702 21:35:38.041022    8323 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0702 21:35:38.041026    8323 kubeadm.go:309] 
	I0702 21:35:38.041069    8323 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0702 21:35:38.041080    8323 kubeadm.go:309] 
	I0702 21:35:38.041114    8323 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0702 21:35:38.041150    8323 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0702 21:35:38.041181    8323 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0702 21:35:38.041186    8323 kubeadm.go:309] 
	I0702 21:35:38.041214    8323 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0702 21:35:38.041217    8323 kubeadm.go:309] 
	I0702 21:35:38.041257    8323 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0702 21:35:38.041264    8323 kubeadm.go:309] 
	I0702 21:35:38.041292    8323 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0702 21:35:38.041328    8323 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0702 21:35:38.041366    8323 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0702 21:35:38.041372    8323 kubeadm.go:309] 
	I0702 21:35:38.041415    8323 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0702 21:35:38.041463    8323 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0702 21:35:38.041470    8323 kubeadm.go:309] 
	I0702 21:35:38.041520    8323 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token pkjmrv.cek479antku93z2w \
	I0702 21:35:38.041578    8323 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4ab8010a117a4bd6be25efd6459f56a0fb2de6896b05d4e484fc24c43035dfd9 \
	I0702 21:35:38.041593    8323 kubeadm.go:309] 	--control-plane 
	I0702 21:35:38.041595    8323 kubeadm.go:309] 
	I0702 21:35:38.041641    8323 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0702 21:35:38.041658    8323 kubeadm.go:309] 
	I0702 21:35:38.041711    8323 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token pkjmrv.cek479antku93z2w \
	I0702 21:35:38.041772    8323 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4ab8010a117a4bd6be25efd6459f56a0fb2de6896b05d4e484fc24c43035dfd9 
	I0702 21:35:38.041833    8323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
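The [WARNING Service-Kubelet] line above is kubeadm's preflight advice, not a fatal error. A minimal sketch to reproduce the check and apply the suggested fix inside the guest (the unit name comes straight from the warning):

    # Check whether the kubelet unit is enabled, and enable it if not,
    # as the preflight warning suggests.
    systemctl is-enabled kubelet.service || sudo systemctl enable kubelet.service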
	I0702 21:35:38.041877    8323 cni.go:84] Creating CNI manager for ""
	I0702 21:35:38.041886    8323 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:35:38.052041    8323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0702 21:35:38.056142    8323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0702 21:35:38.059706    8323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
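Minikube writes its bridge CNI chain to /etc/cni/net.d/1-k8s.conflist (496 bytes here). A quick sanity check over SSH, as a sketch (jq availability in the guest is an assumption; plain cat is the fallback):

    # Dump the conflist minikube just scp'd in; it should name the
    # "bridge" plugin recommended by cni.go above.
    sudo cat /etc/cni/net.d/1-k8s.conflist | jq '.plugins[].type' 2>/dev/null \
      || sudo cat /etc/cni/net.d/1-k8s.conflist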
	I0702 21:35:38.065400    8323 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0702 21:35:38.065448    8323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0702 21:35:38.065468    8323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-908000 minikube.k8s.io/updated_at=2024_07_02T21_35_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6e34d4fd348f73f0f8af294cc2737aeb8da39e8d minikube.k8s.io/name=running-upgrade-908000 minikube.k8s.io/primary=true
	I0702 21:35:38.118634    8323 kubeadm.go:1107] duration metric: took 53.224291ms to wait for elevateKubeSystemPrivileges
	I0702 21:35:38.118657    8323 ops.go:34] apiserver oom_adj: -16
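The minikube-rbac binding created above grants cluster-admin to kube-system's default service account, and the oom_adj of -16 confirms the apiserver process was found. A sketch to verify the binding with the same pinned kubectl the test uses (paths taken from the log):

    # Confirm the cluster-admin grant for kube-system:default exists.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac -o wide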
	W0702 21:35:38.118672    8323 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0702 21:35:38.118676    8323 kubeadm.go:393] duration metric: took 4m15.104192417s to StartCluster
	I0702 21:35:38.118686    8323 settings.go:142] acquiring lock: {Name:mkd9027dadc8b50e6398a16ff695ba9d1e13b355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:35:38.118844    8323 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:35:38.119151    8323 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/kubeconfig: {Name:mk27cb7c8451cb331bdc98ce6310b0b3aba92b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:35:38.119346    8323 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:35:38.119364    8323 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0702 21:35:38.119405    8323 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-908000"
	I0702 21:35:38.119416    8323 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-908000"
	I0702 21:35:38.119415    8323 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-908000"
	W0702 21:35:38.119419    8323 addons.go:243] addon storage-provisioner should already be in state true
	I0702 21:35:38.119434    8323 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-908000"
	I0702 21:35:38.119440    8323 host.go:66] Checking if "running-upgrade-908000" exists ...
	I0702 21:35:38.119445    8323 config.go:182] Loaded profile config "running-upgrade-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:35:38.120303    8323 kapi.go:59] client config for running-upgrade-908000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/client.key", CAFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103de5a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0702 21:35:38.120435    8323 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-908000"
	W0702 21:35:38.120439    8323 addons.go:243] addon default-storageclass should already be in state true
	I0702 21:35:38.120445    8323 host.go:66] Checking if "running-upgrade-908000" exists ...
	I0702 21:35:38.123142    8323 out.go:177] * Verifying Kubernetes components...
	I0702 21:35:38.123452    8323 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0702 21:35:38.126498    8323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0702 21:35:38.126508    8323 sshutil.go:53] new ssh client: &{IP:localhost Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	I0702 21:35:38.129095    8323 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:35:38.133160    8323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:35:38.137023    8323 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0702 21:35:38.137030    8323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0702 21:35:38.137035    8323 sshutil.go:53] new ssh client: &{IP:localhost Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	I0702 21:35:38.223159    8323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0702 21:35:38.228612    8323 api_server.go:52] waiting for apiserver process to appear ...
	I0702 21:35:38.228649    8323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0702 21:35:38.232518    8323 api_server.go:72] duration metric: took 113.163042ms to wait for apiserver process to appear ...
	I0702 21:35:38.232525    8323 api_server.go:88] waiting for apiserver healthz status ...
	I0702 21:35:38.232532    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:35:38.237862    8323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0702 21:35:38.301394    8323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
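Both addon manifests are applied with the same pinned kubectl. If the apiserver were reachable, their effect could be confirmed as sketched below (the storage-provisioner pod name is the usual minikube one and is an assumption here):

    # List storage classes and the provisioner pod created by the two
    # manifests applied above.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl get storageclass
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl -n kube-system get pod storage-provisioner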
	I0702 21:35:43.234696    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:35:43.234773    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:35:48.235505    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:35:48.235573    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:35:53.236231    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:35:53.236352    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:35:58.237102    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:35:58.237149    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:36:03.238093    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:36:03.238165    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:36:08.239421    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:36:08.239447    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0702 21:36:08.574483    8323 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0702 21:36:08.582588    8323 out.go:177] * Enabled addons: storage-provisioner
	I0702 21:36:08.589473    8323 addons.go:510] duration metric: took 30.470451959s for enable addons: enabled=[storage-provisioner]
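Each healthz probe above times out after roughly five seconds, and the default-storageclass callback fails the same way (dial tcp 10.0.2.15:8443: i/o timeout), so the apiserver is accepting no connections at all rather than returning an unhealthy status. The probe can be reproduced by hand from inside the guest, as a sketch (curl availability is an assumption; the endpoint is the one the log polls):

    # Mirror api_server.go's healthz check; a healthy apiserver replies "ok".
    curl -ks --max-time 5 https://10.0.2.15:8443/healthz || echo "timed out"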
	I0702 21:36:13.240928    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:36:13.240963    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:36:18.241385    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:36:18.241405    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:36:23.242621    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:36:23.242646    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:36:28.244753    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:36:28.244780    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:36:33.246891    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:36:33.246917    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:36:38.249060    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:36:38.249289    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:36:38.274474    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:36:38.274575    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:36:38.289362    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:36:38.289441    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:36:38.301493    8323 logs.go:276] 2 containers: [61261c440964 0033d4e81390]
	I0702 21:36:38.301574    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:36:38.325052    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:36:38.325133    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:36:38.342552    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:36:38.342629    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:36:38.356712    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:36:38.356783    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:36:38.367102    8323 logs.go:276] 0 containers: []
	W0702 21:36:38.367115    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:36:38.367182    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:36:38.377722    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:36:38.377737    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:36:38.377743    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:36:38.399801    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:36:38.399816    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:36:38.404761    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:36:38.404769    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:36:38.441657    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:36:38.441669    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:36:38.453877    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:36:38.453888    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:36:38.465676    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:36:38.465686    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:36:38.487816    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:36:38.487829    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:36:38.500830    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:36:38.500843    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:36:38.512896    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:36:38.512909    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:36:38.537477    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:36:38.537486    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:36:38.555925    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:36:38.556018    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:36:38.573305    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:36:38.573315    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:36:38.589069    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:36:38.589082    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:36:38.604048    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:36:38.604063    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:36:38.617412    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:36:38.617427    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:36:38.617463    8323 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0702 21:36:38.617470    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:36:38.617478    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:36:38.617524    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:36:38.617570    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
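Every failed healthz cycle triggers the same diagnostics pass: enumerate the k8s_* containers with docker ps name filters, then tail 400 lines from each. The loop can be replayed manually with the exact commands from the log (the container ID is the apiserver one found above):

    # Find the apiserver container and tail its logs, as logs.go does.
    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
    docker logs --tail 400 7a9072fd4040

The recurring kubelet problem flagged in each pass (the coredns ConfigMap list forbidden for system:node:running-upgrade-908000) is a node-authorizer denial: the kubelet is requesting an object the authorizer's graph does not associate with any pod scheduled to that node.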
	I0702 21:36:48.621620    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:36:53.623202    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:36:53.623559    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:36:53.664870    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:36:53.665006    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:36:53.686240    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:36:53.686357    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:36:53.701444    8323 logs.go:276] 2 containers: [61261c440964 0033d4e81390]
	I0702 21:36:53.701517    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:36:53.714058    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:36:53.714123    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:36:53.724964    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:36:53.725037    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:36:53.738888    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:36:53.738954    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:36:53.749772    8323 logs.go:276] 0 containers: []
	W0702 21:36:53.749784    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:36:53.749845    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:36:53.761043    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:36:53.761057    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:36:53.761061    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:36:53.776847    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:36:53.776860    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:36:53.781273    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:36:53.781282    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:36:53.821816    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:36:53.821830    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:36:53.836334    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:36:53.836346    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:36:53.859625    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:36:53.859636    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:36:53.875069    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:36:53.875083    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:36:53.893620    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:36:53.893634    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:36:53.905383    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:36:53.905394    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:36:53.928769    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:36:53.928784    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:36:53.947208    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:36:53.947301    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:36:53.963620    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:36:53.963625    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:36:53.977318    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:36:53.977331    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:36:53.989031    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:36:53.989040    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:36:54.000392    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:36:54.000402    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:36:54.000428    8323 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0702 21:36:54.000433    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:36:54.000436    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:36:54.000439    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:36:54.000442    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:37:04.004107    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:09.005947    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:09.006415    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:37:09.045918    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:37:09.046043    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:37:09.068073    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:37:09.068189    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:37:09.084235    8323 logs.go:276] 2 containers: [61261c440964 0033d4e81390]
	I0702 21:37:09.084317    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:37:09.096792    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:37:09.096863    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:37:09.107289    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:37:09.107347    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:37:09.117573    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:37:09.117641    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:37:09.127747    8323 logs.go:276] 0 containers: []
	W0702 21:37:09.127762    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:37:09.127813    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:37:09.138258    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:37:09.138272    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:37:09.138279    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:37:09.153495    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:37:09.153508    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:37:09.165098    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:37:09.165109    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:37:09.169546    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:37:09.169554    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:37:09.189213    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:37:09.189224    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:37:09.202944    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:37:09.202957    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:37:09.214445    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:37:09.214459    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:37:09.235371    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:37:09.235381    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:37:09.261269    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:37:09.261277    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:37:09.272562    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:37:09.272575    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:37:09.290593    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:09.290684    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:09.306893    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:37:09.306900    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:37:09.342004    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:37:09.342016    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:37:09.353411    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:37:09.353420    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:37:09.373815    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:09.373823    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:37:09.373852    8323 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0702 21:37:09.373862    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:09.373866    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:09.373872    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:09.373874    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:37:19.375872    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:24.378145    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:24.378529    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:37:24.413415    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:37:24.413545    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:37:24.432063    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:37:24.432161    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:37:24.446478    8323 logs.go:276] 2 containers: [61261c440964 0033d4e81390]
	I0702 21:37:24.446554    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:37:24.459790    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:37:24.459862    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:37:24.470340    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:37:24.470405    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:37:24.481568    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:37:24.481633    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:37:24.493422    8323 logs.go:276] 0 containers: []
	W0702 21:37:24.493434    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:37:24.493494    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:37:24.504433    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:37:24.504449    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:37:24.504454    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:37:24.522486    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:37:24.522496    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:37:24.537986    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:37:24.537996    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:37:24.555381    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:37:24.555391    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:37:24.567127    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:37:24.567140    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:37:24.592631    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:37:24.592646    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:37:24.605566    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:37:24.605578    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:37:24.618747    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:37:24.618758    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:37:24.635543    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:24.635634    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:24.652126    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:37:24.652134    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:37:24.657513    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:37:24.657524    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:37:24.692165    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:37:24.692179    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:37:24.706857    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:37:24.706868    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:37:24.726279    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:37:24.726290    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:37:24.737948    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:24.737963    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:37:24.737988    8323 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0702 21:37:24.737993    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:24.737996    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:24.738001    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:24.738004    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:37:34.740838    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:39.742962    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:39.743119    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:37:39.766524    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:37:39.766606    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:37:39.779513    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:37:39.779585    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:37:39.793943    8323 logs.go:276] 3 containers: [ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:37:39.794017    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:37:39.806349    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:37:39.806429    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:37:39.816817    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:37:39.816889    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:37:39.827237    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:37:39.827303    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:37:39.837437    8323 logs.go:276] 0 containers: []
	W0702 21:37:39.837447    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:37:39.837505    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:37:39.858044    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:37:39.858063    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:37:39.858068    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:37:39.874762    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:39.874853    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:39.891082    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:37:39.891089    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:37:39.905473    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:37:39.905483    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:37:39.924001    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:37:39.924012    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:37:39.928527    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:37:39.928536    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:37:39.939666    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:37:39.939678    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:37:39.951995    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:37:39.952007    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:37:39.964432    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:37:39.964443    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:37:39.977009    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:37:39.977019    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:37:40.011777    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:37:40.011789    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:37:40.029240    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:37:40.029254    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:37:40.043328    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:37:40.043339    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:37:40.068741    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:37:40.068751    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:37:40.079807    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:37:40.079817    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:37:40.100944    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:40.100953    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:37:40.100980    8323 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0702 21:37:40.100984    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:40.100988    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	  Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:40.100992    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:40.100994    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:37:50.103888    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:55.106011    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:55.106193    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:37:55.123904    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:37:55.123984    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:37:55.136954    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:37:55.137028    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:37:55.147952    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:37:55.148017    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:37:55.158663    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:37:55.158720    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:37:55.168713    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:37:55.168780    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:37:55.179672    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:37:55.179740    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:37:55.192779    8323 logs.go:276] 0 containers: []
	W0702 21:37:55.192789    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:37:55.192841    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:37:55.203723    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:37:55.203742    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:37:55.203747    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:37:55.220107    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:37:55.220118    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:37:55.239162    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:37:55.239174    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:37:55.274339    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:37:55.274349    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:37:55.286321    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:37:55.286332    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:37:55.309868    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:37:55.309876    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:37:55.326588    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:55.326685    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:55.343018    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:37:55.343024    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:37:55.348003    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:37:55.348011    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:37:55.362743    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:37:55.362755    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:37:55.380638    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:37:55.380648    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:37:55.391891    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:37:55.391904    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:37:55.413004    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:37:55.413015    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:37:55.428377    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:37:55.428392    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:37:55.439971    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:37:55.439981    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:37:55.458044    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:37:55.458054    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:37:55.469860    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:55.469873    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:37:55.469905    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:37:55.469910    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:55.469914    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:55.469920    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:55.469923    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:38:05.473581    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:10.475956    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:10.476328    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:10.507654    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:38:10.507786    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:10.525567    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:38:10.525665    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:10.538988    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:38:10.539057    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:10.550693    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:38:10.550762    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:10.561491    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:38:10.561561    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:10.572256    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:38:10.572326    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:10.582767    8323 logs.go:276] 0 containers: []
	W0702 21:38:10.582777    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:10.582832    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:10.593562    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:38:10.593578    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:38:10.593583    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:38:10.605418    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:38:10.605429    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:38:10.620656    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:38:10.620667    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:38:10.632287    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:38:10.632297    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:38:10.644063    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:10.644073    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:10.648825    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:10.648834    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:10.683425    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:38:10.683436    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:38:10.698617    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:38:10.698630    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:38:10.710925    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:10.710935    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:10.734539    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:10.734549    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:38:10.750571    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:10.750662    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:10.767153    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:38:10.767157    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:38:10.778044    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:38:10.778056    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:38:10.789421    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:38:10.789434    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:38:10.808579    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:38:10.808589    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:38:10.826310    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:38:10.826320    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:10.838149    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:10.838159    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:38:10.838184    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:38:10.838189    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:10.838193    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:10.838197    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:10.838200    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:38:20.842122    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:25.844511    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:25.844930    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:25.889646    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:38:25.889788    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:25.910644    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:38:25.910743    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:25.937119    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:38:25.937191    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:25.948488    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:38:25.948549    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:25.960079    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:38:25.960153    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:25.972023    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:38:25.972090    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:25.982714    8323 logs.go:276] 0 containers: []
	W0702 21:38:25.982725    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:25.982783    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:25.995148    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:38:25.995165    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:38:25.995171    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:38:26.033307    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:38:26.033318    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:38:26.045709    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:38:26.045723    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:26.058031    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:26.058042    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:38:26.074777    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:26.074871    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:26.091601    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:26.091612    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:26.096206    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:38:26.096213    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:38:26.110533    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:38:26.110544    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:38:26.122545    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:38:26.122554    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:38:26.138004    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:38:26.138015    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:38:26.155779    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:26.155790    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:26.189648    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:38:26.189661    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:38:26.206018    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:38:26.206027    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:38:26.219924    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:38:26.219937    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:38:26.232399    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:38:26.232412    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:38:26.244320    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:26.244329    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:26.268263    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:26.268274    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:38:26.268303    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:38:26.268309    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:26.268313    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:26.268317    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:26.268319    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:38:36.272203    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:41.274279    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:41.274394    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:41.286747    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:38:41.286821    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:41.297005    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:38:41.297075    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:41.307981    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:38:41.308053    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:41.318591    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:38:41.318668    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:41.328762    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:38:41.328824    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:41.348351    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:38:41.348409    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:41.359023    8323 logs.go:276] 0 containers: []
	W0702 21:38:41.359037    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:41.359099    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:41.369312    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:38:41.369331    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:41.369336    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:38:41.387328    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:41.387419    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:41.404157    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:38:41.404164    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:38:41.415409    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:38:41.415418    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:38:41.433699    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:41.433708    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:41.458666    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:38:41.458672    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:38:41.471414    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:38:41.471425    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:38:41.485117    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:41.485129    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:41.526170    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:38:41.526187    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:38:41.542021    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:41.542031    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:41.547006    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:38:41.547015    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:38:41.565296    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:38:41.565306    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:38:41.579534    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:38:41.579546    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:38:41.591713    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:38:41.591722    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:38:41.603506    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:38:41.603517    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:38:41.615543    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:38:41.615553    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:41.627591    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:41.627601    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:38:41.627626    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:38:41.627632    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:41.627638    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:41.627643    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:41.627647    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:38:51.630275    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:56.632392    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:56.632588    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:56.651077    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:38:56.651162    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:56.664831    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:38:56.664910    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:56.676616    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:38:56.676682    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:56.686955    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:38:56.687026    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:56.697321    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:38:56.697387    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:56.707496    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:38:56.707569    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:56.717266    8323 logs.go:276] 0 containers: []
	W0702 21:38:56.717278    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:56.717333    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:56.728433    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:38:56.728452    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:38:56.728458    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:38:56.743991    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:56.744020    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:56.748387    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:56.748395    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:56.771431    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:38:56.771438    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:38:56.782577    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:56.782587    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:56.817489    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:38:56.817500    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:38:56.835696    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:38:56.835707    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:38:56.851500    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:38:56.851512    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:38:56.863450    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:38:56.863462    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:38:56.875662    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:38:56.875675    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:38:56.886916    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:56.886927    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:38:56.903748    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:56.903840    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:56.920608    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:38:56.920613    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:38:56.932289    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:38:56.932300    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:38:56.956129    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:38:56.956139    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:56.972783    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:38:56.972794    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:38:56.984826    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:56.984837    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:38:56.984866    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:38:56.984872    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:56.984876    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:56.984881    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:56.984883    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:39:06.986563    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:11.989268    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:11.989437    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:12.007559    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:39:12.007652    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:12.027965    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:39:12.028036    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:12.039250    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:39:12.039321    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:12.049642    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:39:12.049705    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:12.061601    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:39:12.061665    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:12.076639    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:39:12.076701    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:12.091058    8323 logs.go:276] 0 containers: []
	W0702 21:39:12.091072    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:12.091127    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:12.104835    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:39:12.104851    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:39:12.104857    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:39:12.125517    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:39:12.125530    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:12.137620    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:39:12.137633    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:39:12.153027    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:12.153039    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:12.176572    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:12.176587    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:39:12.193033    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:39:12.193123    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:39:12.209504    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:39:12.209512    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:39:12.220785    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:39:12.220796    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:39:12.236400    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:39:12.236410    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:39:12.248357    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:39:12.248371    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:39:12.260386    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:39:12.260400    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:39:12.273781    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:12.273792    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:12.278917    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:12.278926    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:12.313548    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:39:12.313563    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:39:12.327323    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:39:12.327333    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:39:12.339086    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:39:12.339098    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:39:12.350956    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:39:12.350967    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:39:12.350995    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:39:12.351000    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:39:12.351003    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:39:12.351007    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:39:12.351010    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:39:22.355003    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:27.357644    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:27.358050    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:27.395521    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:39:27.395659    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:27.417625    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:39:27.417735    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:27.432507    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:39:27.432588    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:27.453851    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:39:27.453929    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:27.472926    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:39:27.473003    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:27.489411    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:39:27.489488    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:27.499712    8323 logs.go:276] 0 containers: []
	W0702 21:39:27.499725    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:27.499785    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:27.510443    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:39:27.510462    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:27.510467    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:39:27.527326    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:39:27.527418    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:39:27.543768    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:39:27.543775    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:39:27.555920    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:39:27.555934    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:39:27.577906    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:39:27.577917    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:27.589975    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:39:27.589989    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:39:27.603589    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:39:27.603602    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:39:27.615187    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:39:27.615198    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:39:27.628099    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:27.628110    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:27.663094    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:39:27.663105    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:39:27.675040    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:39:27.675052    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:39:27.686245    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:27.686258    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:27.690648    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:39:27.690657    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:39:27.705469    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:39:27.705479    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:39:27.720941    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:39:27.720950    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:39:27.734220    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:27.734229    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:27.757062    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:39:27.757073    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:39:27.757103    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:39:27.757107    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:39:27.757125    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:39:27.757138    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:39:27.757143    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:39:37.760257    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:42.762922    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:42.768591    8323 out.go:177] 
	W0702 21:39:42.772567    8323 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0702 21:39:42.772585    8323 out.go:239] * 
	W0702 21:39:42.773778    8323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:39:42.783522    8323 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-908000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-02 21:39:42.870417 -0700 PDT m=+1271.588148126
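The root failure is the health wait: for six minutes the loop above alternates between polling https://10.0.2.15:8443/healthz and re-gathering component logs, and the endpoint never answers. An equivalent manual probe from inside the guest (address and port taken from the log; flags illustrative) would be:

    out/minikube-darwin-arm64 -p running-upgrade-908000 ssh -- \
      curl -sk --max-time 5 https://10.0.2.15:8443/healthz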
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-908000 -n running-upgrade-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-908000 -n running-upgrade-908000: exit status 2 (15.6850965s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-908000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cilium-967000                      | cilium-967000             | jenkins | v1.33.1 | 02 Jul 24 21:31 PDT | 02 Jul 24 21:31 PDT |
	| start   | -p force-systemd-env-973000           | force-systemd-env-973000  | jenkins | v1.33.1 | 02 Jul 24 21:31 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-973000              | force-systemd-env-973000  | jenkins | v1.33.1 | 02 Jul 24 21:31 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-973000           | force-systemd-env-973000  | jenkins | v1.33.1 | 02 Jul 24 21:31 PDT | 02 Jul 24 21:31 PDT |
	| start   | -p force-systemd-flag-237000          | force-systemd-flag-237000 | jenkins | v1.33.1 | 02 Jul 24 21:31 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-237000             | force-systemd-flag-237000 | jenkins | v1.33.1 | 02 Jul 24 21:32 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-237000          | force-systemd-flag-237000 | jenkins | v1.33.1 | 02 Jul 24 21:32 PDT | 02 Jul 24 21:32 PDT |
	| start   | -p docker-flags-414000                | docker-flags-414000       | jenkins | v1.33.1 | 02 Jul 24 21:32 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-414000 ssh               | docker-flags-414000       | jenkins | v1.33.1 | 02 Jul 24 21:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-414000 ssh               | docker-flags-414000       | jenkins | v1.33.1 | 02 Jul 24 21:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-414000                | docker-flags-414000       | jenkins | v1.33.1 | 02 Jul 24 21:32 PDT | 02 Jul 24 21:32 PDT |
	| start   | -p cert-expiration-826000             | cert-expiration-826000    | jenkins | v1.33.1 | 02 Jul 24 21:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-826000             | cert-expiration-826000    | jenkins | v1.33.1 | 02 Jul 24 21:35 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-826000             | cert-expiration-826000    | jenkins | v1.33.1 | 02 Jul 24 21:35 PDT | 02 Jul 24 21:35 PDT |
	| start   | -p cert-options-775000                | cert-options-775000       | jenkins | v1.33.1 | 02 Jul 24 21:35 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-775000 ssh               | cert-options-775000       | jenkins | v1.33.1 | 02 Jul 24 21:35 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-775000 -- sudo        | cert-options-775000       | jenkins | v1.33.1 | 02 Jul 24 21:35 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-775000                | cert-options-775000       | jenkins | v1.33.1 | 02 Jul 24 21:35 PDT | 02 Jul 24 21:35 PDT |
	| start   | -p kubernetes-upgrade-521000          | kubernetes-upgrade-521000 | jenkins | v1.33.1 | 02 Jul 24 21:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-521000          | kubernetes-upgrade-521000 | jenkins | v1.33.1 | 02 Jul 24 21:35 PDT | 02 Jul 24 21:35 PDT |
	| start   | -p kubernetes-upgrade-521000          | kubernetes-upgrade-521000 | jenkins | v1.33.1 | 02 Jul 24 21:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-521000          | kubernetes-upgrade-521000 | jenkins | v1.33.1 | 02 Jul 24 21:35 PDT | 02 Jul 24 21:35 PDT |
	| start   | -p stopped-upgrade-896000             | minikube                  | jenkins | v1.26.0 | 02 Jul 24 21:35 PDT | 02 Jul 24 21:36 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-896000 stop           | minikube                  | jenkins | v1.26.0 | 02 Jul 24 21:36 PDT | 02 Jul 24 21:36 PDT |
	| start   | -p stopped-upgrade-896000             | stopped-upgrade-896000    | jenkins | v1.33.1 | 02 Jul 24 21:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/02 21:36:49
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0702 21:36:49.594256    8914 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:36:49.594451    8914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:36:49.594456    8914 out.go:304] Setting ErrFile to fd 2...
	I0702 21:36:49.594459    8914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:36:49.594600    8914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:36:49.595759    8914 out.go:298] Setting JSON to false
	I0702 21:36:49.615010    8914 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5778,"bootTime":1719975631,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:36:49.615078    8914 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:36:49.619244    8914 out.go:177] * [stopped-upgrade-896000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:36:49.627121    8914 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:36:49.627173    8914 notify.go:220] Checking for updates...
	I0702 21:36:49.634105    8914 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:36:49.637163    8914 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:36:49.640193    8914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:36:49.643085    8914 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:36:49.646147    8914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:36:49.649495    8914 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:36:49.653036    8914 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0702 21:36:49.657114    8914 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:36:49.661096    8914 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:36:49.669034    8914 start.go:297] selected driver: qemu2
	I0702 21:36:49.669041    8914 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0702 21:36:49.669085    8914 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:36:49.671612    8914 cni.go:84] Creating CNI manager for ""
	I0702 21:36:49.671629    8914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:36:49.671652    8914 start.go:340] cluster config:
	{Name:stopped-upgrade-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0702 21:36:49.671700    8914 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:36:49.680098    8914 out.go:177] * Starting "stopped-upgrade-896000" primary control-plane node in "stopped-upgrade-896000" cluster
	I0702 21:36:49.684116    8914 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0702 21:36:49.684131    8914 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0702 21:36:49.684138    8914 cache.go:56] Caching tarball of preloaded images
	I0702 21:36:49.684198    8914 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:36:49.684203    8914 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0702 21:36:49.684248    8914 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/config.json ...
	I0702 21:36:49.684564    8914 start.go:360] acquireMachinesLock for stopped-upgrade-896000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:36:49.684596    8914 start.go:364] duration metric: took 26.166µs to acquireMachinesLock for "stopped-upgrade-896000"
	I0702 21:36:49.684606    8914 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:36:49.684611    8914 fix.go:54] fixHost starting: 
	I0702 21:36:49.684718    8914 fix.go:112] recreateIfNeeded on stopped-upgrade-896000: state=Stopped err=<nil>
	W0702 21:36:49.684725    8914 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:36:49.688116    8914 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-896000" ...
	I0702 21:36:48.621620    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:36:49.696149    8914 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51457-:22,hostfwd=tcp::51458-:2376,hostname=stopped-upgrade-896000 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/disk.qcow2
	I0702 21:36:49.741454    8914 main.go:141] libmachine: STDOUT: 
	I0702 21:36:49.741485    8914 main.go:141] libmachine: STDERR: 
	I0702 21:36:49.741496    8914 main.go:141] libmachine: Waiting for VM to start (ssh -p 51457 docker@127.0.0.1)...
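	While libmachine waits here, the forwarded guest SSH port can be probed by hand with the key minikube provisions for the machine (key path appears later in this log; invocation illustrative):

	    ssh -p 51457 -o StrictHostKeyChecking=no \
	      -i /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/id_rsa \
	      docker@127.0.0.1 true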
	I0702 21:36:53.623202    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:36:53.623559    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:36:53.664870    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:36:53.665006    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:36:53.686240    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:36:53.686357    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:36:53.701444    8323 logs.go:276] 2 containers: [61261c440964 0033d4e81390]
	I0702 21:36:53.701517    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:36:53.714058    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:36:53.714123    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:36:53.724964    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:36:53.725037    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:36:53.738888    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:36:53.738954    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:36:53.749772    8323 logs.go:276] 0 containers: []
	W0702 21:36:53.749784    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:36:53.749845    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:36:53.761043    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:36:53.761057    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:36:53.761061    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:36:53.776847    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:36:53.776860    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:36:53.781273    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:36:53.781282    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:36:53.821816    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:36:53.821830    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:36:53.836334    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:36:53.836346    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:36:53.859625    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:36:53.859636    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:36:53.875069    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:36:53.875083    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:36:53.893620    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:36:53.893634    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:36:53.905383    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:36:53.905394    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:36:53.928769    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:36:53.928784    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:36:53.947208    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:36:53.947301    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:36:53.963620    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:36:53.963625    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:36:53.977318    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:36:53.977331    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:36:53.989031    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:36:53.989040    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:36:54.000392    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:36:54.000402    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:36:54.000428    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:36:54.000433    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:36:54.000436    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:36:54.000439    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:36:54.000442    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:37:04.004107    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:09.365629    8914 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/config.json ...
	I0702 21:37:09.365840    8914 machine.go:94] provisionDockerMachine start ...
	I0702 21:37:09.365891    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:09.366037    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:09.366044    8914 main.go:141] libmachine: About to run SSH command:
	hostname
	I0702 21:37:09.416432    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0702 21:37:09.416448    8914 buildroot.go:166] provisioning hostname "stopped-upgrade-896000"
	I0702 21:37:09.416503    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:09.416633    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:09.416638    8914 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-896000 && echo "stopped-upgrade-896000" | sudo tee /etc/hostname
	I0702 21:37:09.468771    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-896000
	
	I0702 21:37:09.468820    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:09.468934    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:09.468943    8914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-896000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-896000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-896000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0702 21:37:09.522928    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0702 21:37:09.522937    8914 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19184-6175/.minikube CaCertPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19184-6175/.minikube}
	I0702 21:37:09.522947    8914 buildroot.go:174] setting up certificates
	I0702 21:37:09.522955    8914 provision.go:84] configureAuth start
	I0702 21:37:09.522959    8914 provision.go:143] copyHostCerts
	I0702 21:37:09.523039    8914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19184-6175/.minikube/key.pem, removing ...
	I0702 21:37:09.523045    8914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19184-6175/.minikube/key.pem
	I0702 21:37:09.523571    8914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19184-6175/.minikube/key.pem (1675 bytes)
	I0702 21:37:09.523779    8914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.pem, removing ...
	I0702 21:37:09.523782    8914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.pem
	I0702 21:37:09.523833    8914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.pem (1078 bytes)
	I0702 21:37:09.523938    8914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19184-6175/.minikube/cert.pem, removing ...
	I0702 21:37:09.523942    8914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19184-6175/.minikube/cert.pem
	I0702 21:37:09.523987    8914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19184-6175/.minikube/cert.pem (1123 bytes)
	I0702 21:37:09.524076    8914 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-896000 san=[127.0.0.1 localhost minikube stopped-upgrade-896000]
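	Once the server certificate lands on the machine, the SANs requested above can be checked with the same kind of openssl inspection the cert-options test uses (remote path from the scp step below; command illustrative):

	    sudo openssl x509 -text -noout -in /etc/docker/server.pem \
	      | grep -A1 'Subject Alternative Name'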
	I0702 21:37:09.005947    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:09.006415    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:37:09.045918    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:37:09.046043    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:37:09.068073    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:37:09.068189    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:37:09.084235    8323 logs.go:276] 2 containers: [61261c440964 0033d4e81390]
	I0702 21:37:09.084317    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:37:09.096792    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:37:09.096863    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:37:09.107289    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:37:09.107347    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:37:09.117573    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:37:09.117641    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:37:09.127747    8323 logs.go:276] 0 containers: []
	W0702 21:37:09.127762    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:37:09.127813    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:37:09.138258    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:37:09.138272    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:37:09.138279    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:37:09.153495    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:37:09.153508    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:37:09.165098    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:37:09.165109    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:37:09.169546    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:37:09.169554    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:37:09.189213    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:37:09.189224    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:37:09.202944    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:37:09.202957    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:37:09.214445    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:37:09.214459    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:37:09.235371    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:37:09.235381    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:37:09.261269    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:37:09.261277    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:37:09.272562    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:37:09.272575    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:37:09.290593    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:09.290684    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:09.306893    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:37:09.306900    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:37:09.342004    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:37:09.342016    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:37:09.353411    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:37:09.353420    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:37:09.373815    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:09.373823    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:37:09.373852    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:37:09.373862    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:09.373866    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:09.373872    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:09.373874    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:37:09.602981    8914 provision.go:177] copyRemoteCerts
	I0702 21:37:09.603024    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0702 21:37:09.603032    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/id_rsa Username:docker}
	I0702 21:37:09.631131    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0702 21:37:09.638408    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0702 21:37:09.646965    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0702 21:37:09.653969    8914 provision.go:87] duration metric: took 131.006541ms to configureAuth
	I0702 21:37:09.653978    8914 buildroot.go:189] setting minikube options for container-runtime
	I0702 21:37:09.654098    8914 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:37:09.654134    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:09.654220    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:09.654225    8914 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0702 21:37:09.704826    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0702 21:37:09.704834    8914 buildroot.go:70] root file system type: tmpfs
	I0702 21:37:09.704884    8914 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0702 21:37:09.704931    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:09.705036    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:09.705068    8914 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0702 21:37:09.760269    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0702 21:37:09.760315    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:09.760425    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:09.760455    8914 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0702 21:37:10.147078    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0702 21:37:10.147090    8914 machine.go:97] duration metric: took 781.260584ms to provisionDockerMachine
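	The unit written above leans on the standard systemd override pattern its own comments describe: an empty ExecStart= clears any command inherited from a base unit, and the next ExecStart= supplies the real dockerd invocation. Reduced to a minimal drop-in (path and flags illustrative), the same idea is:

	    # /etc/systemd/system/docker.service.d/10-override.conf
	    [Service]
	    ExecStart=
	    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

	followed by sudo systemctl daemon-reload && sudo systemctl restart docker, which is the sequence the SSH command above chains together.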
	I0702 21:37:10.147097    8914 start.go:293] postStartSetup for "stopped-upgrade-896000" (driver="qemu2")
	I0702 21:37:10.147104    8914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0702 21:37:10.147167    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0702 21:37:10.147178    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/id_rsa Username:docker}
	I0702 21:37:10.173380    8914 ssh_runner.go:195] Run: cat /etc/os-release
	I0702 21:37:10.175019    8914 info.go:137] Remote host: Buildroot 2021.02.12
	I0702 21:37:10.175028    8914 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19184-6175/.minikube/addons for local assets ...
	I0702 21:37:10.175124    8914 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19184-6175/.minikube/files for local assets ...
	I0702 21:37:10.175262    8914 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19184-6175/.minikube/files/etc/ssl/certs/66692.pem -> 66692.pem in /etc/ssl/certs
	I0702 21:37:10.175405    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0702 21:37:10.178507    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/files/etc/ssl/certs/66692.pem --> /etc/ssl/certs/66692.pem (1708 bytes)
	I0702 21:37:10.185302    8914 start.go:296] duration metric: took 38.1995ms for postStartSetup
	I0702 21:37:10.185316    8914 fix.go:56] duration metric: took 20.501106125s for fixHost
	I0702 21:37:10.185359    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:10.185474    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:10.185481    8914 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0702 21:37:10.235705    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719981430.687783629
	
	I0702 21:37:10.235714    8914 fix.go:216] guest clock: 1719981430.687783629
	I0702 21:37:10.235718    8914 fix.go:229] Guest: 2024-07-02 21:37:10.687783629 -0700 PDT Remote: 2024-07-02 21:37:10.185317 -0700 PDT m=+20.621860376 (delta=502.466629ms)
	I0702 21:37:10.235736    8914 fix.go:200] guest clock delta is within tolerance: 502.466629ms
	I0702 21:37:10.235739    8914 start.go:83] releasing machines lock for "stopped-upgrade-896000", held for 20.55153875s
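	A note on the %!s(MISSING) fragments in commands such as date +%!s(MISSING).%!N(MISSING): that is Go's fmt rendering of a verb whose operand was never passed to the log formatter, not what ran in the guest. Judging by the epoch.nanoseconds output used for the clock-delta check above, the command actually executed is almost certainly:

	    date +%s.%N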
	I0702 21:37:10.235792    8914 ssh_runner.go:195] Run: cat /version.json
	I0702 21:37:10.235801    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/id_rsa Username:docker}
	I0702 21:37:10.235815    8914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0702 21:37:10.235836    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/id_rsa Username:docker}
	W0702 21:37:10.236300    8914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51581->127.0.0.1:51457: write: broken pipe
	I0702 21:37:10.236319    8914 retry.go:31] will retry after 165.268788ms: ssh: handshake failed: write tcp 127.0.0.1:51581->127.0.0.1:51457: write: broken pipe
	W0702 21:37:10.261369    8914 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0702 21:37:10.261414    8914 ssh_runner.go:195] Run: systemctl --version
	I0702 21:37:10.263086    8914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0702 21:37:10.264535    8914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0702 21:37:10.264562    8914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0702 21:37:10.267558    8914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0702 21:37:10.272269    8914 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
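The two find/sed invocations above rewrite any podman or bridge CNI conflists on the guest so their subnet matches the 10.244.0.0/16 pod CIDR. The same rewrite expressed in Go, as an illustrative sketch (pinSubnet is a hypothetical name):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // pinSubnet forces every "subnet" value in a CNI conflist to the
    // cluster pod CIDR, doing in Go what the sed expressions do on the guest.
    func pinSubnet(conf []byte, podCIDR string) []byte {
    	re := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
    	return re.ReplaceAll(conf, []byte(fmt.Sprintf(`"subnet": %q`, podCIDR)))
    }

    func main() {
    	in := []byte(`{"ipam": {"type": "host-local", "subnet": "10.88.0.0/16"}}`)
    	fmt.Println(string(pinSubnet(in, "10.244.0.0/16")))
    }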
	I0702 21:37:10.272283    8914 start.go:494] detecting cgroup driver to use...
	I0702 21:37:10.272360    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0702 21:37:10.279090    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0702 21:37:10.282518    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0702 21:37:10.285399    8914 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0702 21:37:10.285425    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0702 21:37:10.288281    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0702 21:37:10.291620    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0702 21:37:10.294873    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0702 21:37:10.298178    8914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0702 21:37:10.300943    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0702 21:37:10.303883    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0702 21:37:10.307125    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0702 21:37:10.310423    8914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0702 21:37:10.312883    8914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0702 21:37:10.315816    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:37:10.403266    8914 ssh_runner.go:195] Run: sudo systemctl restart containerd
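Every "ssh_runner.go:195] Run:" line in this log is one command executed on the VM over SSH. A rough stand-in using the system ssh binary is sketched below; minikube itself uses an in-process SSH client, so treat the flags and the runOnGuest helper as assumptions for illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runOnGuest runs a shell command on the guest over SSH and returns
    // its combined output, standing in for what ssh_runner does in-process.
    func runOnGuest(port, keyPath, cmd string) (string, error) {
    	out, err := exec.Command("ssh",
    		"-p", port,
    		"-i", keyPath,
    		"-o", "StrictHostKeyChecking=no",
    		"docker@localhost", cmd).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// Port and key path taken from the sshutil lines earlier in the log.
    	out, err := runOnGuest("51457",
    		"/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/id_rsa",
    		"sudo systemctl restart containerd")
    	fmt.Println(out, err)
    }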
	I0702 21:37:10.409548    8914 start.go:494] detecting cgroup driver to use...
	I0702 21:37:10.409612    8914 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0702 21:37:10.418217    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0702 21:37:10.423827    8914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0702 21:37:10.435035    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0702 21:37:10.476933    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0702 21:37:10.481901    8914 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0702 21:37:10.540972    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0702 21:37:10.546806    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0702 21:37:10.552585    8914 ssh_runner.go:195] Run: which cri-dockerd
	I0702 21:37:10.554035    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0702 21:37:10.556795    8914 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0702 21:37:10.561728    8914 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0702 21:37:10.637621    8914 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0702 21:37:10.716401    8914 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0702 21:37:10.716466    8914 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0702 21:37:10.721741    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:37:10.790192    8914 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0702 21:37:11.954857    8914 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.16466875s)
	I0702 21:37:11.954931    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0702 21:37:11.959787    8914 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0702 21:37:11.966083    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0702 21:37:11.970325    8914 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0702 21:37:12.037692    8914 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0702 21:37:12.113917    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:37:12.192822    8914 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0702 21:37:12.198557    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0702 21:37:12.202685    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:37:12.265813    8914 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0702 21:37:12.305004    8914 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0702 21:37:12.305102    8914 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0702 21:37:12.307098    8914 start.go:562] Will wait 60s for crictl version
	I0702 21:37:12.307153    8914 ssh_runner.go:195] Run: which crictl
	I0702 21:37:12.308963    8914 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0702 21:37:12.323739    8914 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
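Runtime detection above parses the key/value output of crictl version. A small Go sketch of that parsing (parseCrictlVersion is a hypothetical helper name):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseCrictlVersion splits `crictl version` output into key/value
    // pairs such as RuntimeName and RuntimeVersion.
    func parseCrictlVersion(out string) map[string]string {
    	fields := map[string]string{}
    	for _, line := range strings.Split(out, "\n") {
    		if k, v, ok := strings.Cut(line, ":"); ok {
    			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
    		}
    	}
    	return fields
    }

    func main() {
    	out := "Version:  0.1.0\nRuntimeName:  docker\nRuntimeVersion:  20.10.16\nRuntimeApiVersion:  1.41.0"
    	v := parseCrictlVersion(out)
    	fmt.Println(v["RuntimeName"], v["RuntimeVersion"])
    }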
	I0702 21:37:12.323804    8914 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0702 21:37:12.340501    8914 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0702 21:37:12.360325    8914 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0702 21:37:12.360447    8914 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0702 21:37:12.361669    8914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0702 21:37:12.365029    8914 kubeadm.go:877] updating cluster {Name:stopped-upgrade-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0702 21:37:12.365076    8914 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0702 21:37:12.365129    8914 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0702 21:37:12.375256    8914 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0702 21:37:12.375266    8914 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0702 21:37:12.375308    8914 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0702 21:37:12.378757    8914 ssh_runner.go:195] Run: which lz4
	I0702 21:37:12.380000    8914 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0702 21:37:12.381252    8914 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0702 21:37:12.381264    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0702 21:37:13.319555    8914 docker.go:649] duration metric: took 939.60225ms to copy over tarball
	I0702 21:37:13.319614    8914 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0702 21:37:14.471990    8914 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.152384583s)
	I0702 21:37:14.472004    8914 ssh_runner.go:146] rm: /preloaded.tar.lz4
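The preload flow above is: stat the remote tarball, scp it over only when the stat fails, extract it under /var with lz4, then remove it. Condensed into Go under assumed run/scp stand-ins for ssh_runner:

    package main

    import "fmt"

    // ensurePreload mirrors the check-then-copy sequence in the log.
    // run and scp are hypothetical stand-ins for ssh_runner.
    func ensurePreload(run func(string) error, scp func(src, dst string) error, local string) error {
    	if err := run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
    		if err := scp(local, "/preloaded.tar.lz4"); err != nil {
    			return err
    		}
    	}
    	if err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
    		return err
    	}
    	return run("rm /preloaded.tar.lz4")
    }

    func main() {
    	run := func(cmd string) error { fmt.Println("run:", cmd); return nil }
    	scp := func(src, dst string) error { fmt.Println("scp:", src, "->", dst); return nil }
    	fmt.Println(ensurePreload(run, scp, "preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4"))
    }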
	I0702 21:37:14.487922    8914 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0702 21:37:14.491636    8914 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0702 21:37:14.497009    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:37:14.574165    8914 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0702 21:37:16.078019    8914 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.503866708s)
	I0702 21:37:16.078127    8914 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0702 21:37:16.088962    8914 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0702 21:37:16.088972    8914 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0702 21:37:16.088976    8914 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0702 21:37:16.094550    8914 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:37:16.096862    8914 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:37:16.098529    8914 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:37:16.098620    8914 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:37:16.100818    8914 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:37:16.100882    8914 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:37:16.102211    8914 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:37:16.102230    8914 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:37:16.103415    8914 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0702 21:37:16.103421    8914 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:37:16.104650    8914 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:37:16.104660    8914 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0702 21:37:16.105660    8914 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0702 21:37:16.105867    8914 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:37:16.106876    8914 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0702 21:37:16.107530    8914 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:37:16.558714    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:37:16.570019    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:37:16.571010    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:37:16.572581    8914 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0702 21:37:16.572606    8914 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:37:16.572643    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:37:16.577558    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:37:16.589605    8914 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0702 21:37:16.589627    8914 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:37:16.589682    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:37:16.591837    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0702 21:37:16.593154    8914 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0702 21:37:16.593170    8914 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:37:16.593201    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:37:16.594352    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0702 21:37:16.597568    8914 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0702 21:37:16.597586    8914 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:37:16.597633    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:37:16.605926    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0702 21:37:16.613428    8914 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0702 21:37:16.613446    8914 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0702 21:37:16.613496    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0702 21:37:16.614250    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0702 21:37:16.630176    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0702 21:37:16.630204    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0702 21:37:16.630319    8914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0702 21:37:16.631495    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0702 21:37:16.632808    8914 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0702 21:37:16.632827    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0702 21:37:16.641665    8914 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0702 21:37:16.641685    8914 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0702 21:37:16.641737    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0702 21:37:16.647138    8914 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0702 21:37:16.647270    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:37:16.649226    8914 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0702 21:37:16.649236    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0702 21:37:16.662509    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0702 21:37:16.662611    8914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0702 21:37:16.677566    8914 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0702 21:37:16.677587    8914 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:37:16.677643    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:37:16.694247    8914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0702 21:37:16.694273    8914 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0702 21:37:16.694300    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0702 21:37:16.694311    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0702 21:37:16.694409    8914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0702 21:37:16.701336    8914 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0702 21:37:16.701415    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0702 21:37:16.713209    8914 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0702 21:37:16.713313    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:37:16.763373    8914 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0702 21:37:16.763387    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0702 21:37:16.787395    8914 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0702 21:37:16.787481    8914 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:37:16.787561    8914 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:37:16.849550    8914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0702 21:37:16.853152    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0702 21:37:16.853276    8914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0702 21:37:16.859942    8914 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0702 21:37:16.859974    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0702 21:37:16.935959    8914 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0702 21:37:16.935973    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0702 21:37:17.288879    8914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0702 21:37:17.288903    8914 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0702 21:37:17.288908    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0702 21:37:17.425392    8914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0702 21:37:17.425437    8914 cache_images.go:92] duration metric: took 1.336479375s to LoadCachedImages
	W0702 21:37:17.425487    8914 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
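Each image in the LoadCachedImages loop above follows the same pattern: the preloaded k8s.gcr.io tag does not satisfy the wanted registry.k8s.io reference, so the stale image is removed, the cached tarball is copied to the guest, and it is piped into docker load. One iteration sketched in Go, with run/scp as assumed stand-ins:

    package main

    import "fmt"

    // loadCachedImage condenses one iteration of the loop: rmi the
    // wrong image, scp the cached tarball over, pipe it into docker load.
    func loadCachedImage(run func(string) error, scp func(src, dst string) error, img, cached, remote string) error {
    	_ = run("docker rmi " + img) // ignore failure when the image is absent
    	if err := scp(cached, remote); err != nil {
    		return err
    	}
    	return run(fmt.Sprintf(`/bin/bash -c "sudo cat %s | docker load"`, remote))
    }

    func main() {
    	run := func(cmd string) error { fmt.Println("run:", cmd); return nil }
    	scp := func(src, dst string) error { fmt.Println("scp:", src, "->", dst); return nil }
    	fmt.Println(loadCachedImage(run, scp,
    		"registry.k8s.io/pause:3.7",
    		".minikube/cache/images/arm64/registry.k8s.io/pause_3.7",
    		"/var/lib/minikube/images/pause_3.7"))
    }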
	I0702 21:37:17.425496    8914 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0702 21:37:17.425552    8914 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-896000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0702 21:37:17.425620    8914 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0702 21:37:17.438951    8914 cni.go:84] Creating CNI manager for ""
	I0702 21:37:17.438962    8914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:37:17.438967    8914 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0702 21:37:17.438976    8914 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-896000 NodeName:stopped-upgrade-896000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0702 21:37:17.439051    8914 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-896000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0702 21:37:17.439101    8914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0702 21:37:17.442147    8914 binaries.go:44] Found k8s binaries, skipping transfer
	I0702 21:37:17.442176    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0702 21:37:17.444767    8914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0702 21:37:17.449591    8914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0702 21:37:17.454689    8914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0702 21:37:17.460432    8914 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0702 21:37:17.461702    8914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
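The /etc/hosts one-liner above (also used earlier for host.minikube.internal) filters out any stale line for the name, appends the fresh mapping, and sudo-copies the temp file back into place. The same upsert in Go, as a sketch (upsertHostsEntry is a hypothetical name):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHostsEntry drops any existing line ending in "\t<name>" and
    // appends the new mapping; the caller would then copy the result
    // back over /etc/hosts via sudo cp, as the shell one-liner does.
    func upsertHostsEntry(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	return strings.Join(kept, "\n") + "\n" + ip + "\t" + name + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n10.0.2.15\tcontrol-plane.minikube.internal"
    	fmt.Print(upsertHostsEntry(hosts, "10.0.2.15", "control-plane.minikube.internal"))
    }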
	I0702 21:37:17.464990    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:37:17.546611    8914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0702 21:37:17.553622    8914 certs.go:68] Setting up /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000 for IP: 10.0.2.15
	I0702 21:37:17.553638    8914 certs.go:194] generating shared ca certs ...
	I0702 21:37:17.553647    8914 certs.go:226] acquiring lock for ca certs: {Name:mk1563fd1929f66ff1d36559bceb7dd892d19aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:37:17.553823    8914 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.key
	I0702 21:37:17.553876    8914 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/proxy-client-ca.key
	I0702 21:37:17.553883    8914 certs.go:256] generating profile certs ...
	I0702 21:37:17.553960    8914 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/client.key
	I0702 21:37:17.553979    8914 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.key.c154573e
	I0702 21:37:17.553988    8914 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.crt.c154573e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0702 21:37:17.701173    8914 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.crt.c154573e ...
	I0702 21:37:17.701189    8914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.crt.c154573e: {Name:mkffc538c553c82411cd7a5e2a9f64584d49fa3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:37:17.701589    8914 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.key.c154573e ...
	I0702 21:37:17.701596    8914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.key.c154573e: {Name:mkb1593eec78c3bae310795eeae3428ed268c95b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:37:17.701739    8914 certs.go:381] copying /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.crt.c154573e -> /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.crt
	I0702 21:37:17.701876    8914 certs.go:385] copying /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.key.c154573e -> /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.key
	I0702 21:37:17.702031    8914 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/proxy-client.key
	I0702 21:37:17.702169    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/6669.pem (1338 bytes)
	W0702 21:37:17.702203    8914 certs.go:480] ignoring /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/6669_empty.pem, impossibly tiny 0 bytes
	I0702 21:37:17.702212    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca-key.pem (1675 bytes)
	I0702 21:37:17.702235    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem (1078 bytes)
	I0702 21:37:17.702263    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem (1123 bytes)
	I0702 21:37:17.702288    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/key.pem (1675 bytes)
	I0702 21:37:17.702325    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/files/etc/ssl/certs/66692.pem (1708 bytes)
	I0702 21:37:17.702689    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0702 21:37:17.709452    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0702 21:37:17.715963    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0702 21:37:17.723269    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0702 21:37:17.732128    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0702 21:37:17.739322    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0702 21:37:17.746601    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0702 21:37:17.753853    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0702 21:37:17.760594    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/files/etc/ssl/certs/66692.pem --> /usr/share/ca-certificates/66692.pem (1708 bytes)
	I0702 21:37:17.767530    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0702 21:37:17.774573    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/6669.pem --> /usr/share/ca-certificates/6669.pem (1338 bytes)
	I0702 21:37:17.781225    8914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0702 21:37:17.785952    8914 ssh_runner.go:195] Run: openssl version
	I0702 21:37:17.787902    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0702 21:37:17.791361    8914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0702 21:37:17.792850    8914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 04:30 /usr/share/ca-certificates/minikubeCA.pem
	I0702 21:37:17.792872    8914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0702 21:37:17.794593    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0702 21:37:17.797462    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6669.pem && ln -fs /usr/share/ca-certificates/6669.pem /etc/ssl/certs/6669.pem"
	I0702 21:37:17.800526    8914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6669.pem
	I0702 21:37:17.802042    8914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 04:19 /usr/share/ca-certificates/6669.pem
	I0702 21:37:17.802061    8914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6669.pem
	I0702 21:37:17.803770    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6669.pem /etc/ssl/certs/51391683.0"
	I0702 21:37:17.807173    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/66692.pem && ln -fs /usr/share/ca-certificates/66692.pem /etc/ssl/certs/66692.pem"
	I0702 21:37:17.810273    8914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/66692.pem
	I0702 21:37:17.811639    8914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 04:19 /usr/share/ca-certificates/66692.pem
	I0702 21:37:17.811660    8914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/66692.pem
	I0702 21:37:17.813485    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/66692.pem /etc/ssl/certs/3ec20f2e.0"
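The openssl/ln sequence above installs each CA certificate under /etc/ssl/certs as <subject-hash>.0, which is how OpenSSL-based tools locate trust anchors. An illustrative Go version that shells out to openssl (linkCACert is a hypothetical name; on the guest this runs under sudo):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert computes the subject hash of a CA certificate and links
    // it into certsDir as <hash>.0, mirroring the log's openssl/ln steps.
    func linkCACert(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }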
	I0702 21:37:17.816482    8914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0702 21:37:17.818071    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0702 21:37:17.820040    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0702 21:37:17.821929    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0702 21:37:17.825163    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0702 21:37:17.826832    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0702 21:37:17.828557    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0702 21:37:17.830651    8914 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0702 21:37:17.830729    8914 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0702 21:37:17.841146    8914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0702 21:37:17.844285    8914 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0702 21:37:17.844292    8914 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0702 21:37:17.844295    8914 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0702 21:37:17.844320    8914 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0702 21:37:17.847748    8914 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0702 21:37:17.848060    8914 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-896000" does not appear in /Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:37:17.848181    8914 kubeconfig.go:62] /Users/jenkins/minikube-integration/19184-6175/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-896000" cluster setting kubeconfig missing "stopped-upgrade-896000" context setting]
	I0702 21:37:17.848383    8914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/kubeconfig: {Name:mk27cb7c8451cb331bdc98ce6310b0b3aba92b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:37:17.848808    8914 kapi.go:59] client config for stopped-upgrade-896000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/client.key", CAFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101e21a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0702 21:37:17.849144    8914 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0702 21:37:17.852248    8914 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-896000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
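Drift detection above is just diff -u between the deployed kubeadm.yaml and the freshly rendered one; a non-zero diff exit is what triggers the reconfigure. Sketched in Go, run locally for illustration rather than over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // detectDrift runs diff -u; diff exits non-zero when the files
    // differ, which maps to "config drift, reconfigure the cluster".
    func detectDrift(oldPath, newPath string) (bool, string) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	return err != nil, string(out)
    }

    func main() {
    	drifted, diff := detectDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println("drift:", drifted)
    	fmt.Print(diff)
    }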
	I0702 21:37:17.852255    8914 kubeadm.go:1154] stopping kube-system containers ...
	I0702 21:37:17.852301    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0702 21:37:17.863539    8914 docker.go:483] Stopping containers: [ada7e661f58d 5162823a6147 82726302ecd9 80469431360e 866bbe2600ef ca658153f418 29fd0adefccd 5cbc16914f5c]
	I0702 21:37:17.863605    8914 ssh_runner.go:195] Run: docker stop ada7e661f58d 5162823a6147 82726302ecd9 80469431360e 866bbe2600ef ca658153f418 29fd0adefccd 5cbc16914f5c
	I0702 21:37:17.874403    8914 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0702 21:37:17.879712    8914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0702 21:37:17.882786    8914 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0702 21:37:17.882792    8914 kubeadm.go:156] found existing configuration files:
	
	I0702 21:37:17.882815    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/admin.conf
	I0702 21:37:17.885213    8914 kubeadm.go:162] "https://control-plane.minikube.internal:51493" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0702 21:37:17.885236    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0702 21:37:17.888074    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/kubelet.conf
	I0702 21:37:17.891029    8914 kubeadm.go:162] "https://control-plane.minikube.internal:51493" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0702 21:37:17.891064    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0702 21:37:17.894058    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/controller-manager.conf
	I0702 21:37:17.896554    8914 kubeadm.go:162] "https://control-plane.minikube.internal:51493" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0702 21:37:17.896575    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0702 21:37:17.899548    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/scheduler.conf
	I0702 21:37:17.902350    8914 kubeadm.go:162] "https://control-plane.minikube.internal:51493" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0702 21:37:17.902382    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0702 21:37:17.904773    8914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0702 21:37:17.907847    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0702 21:37:17.932109    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0702 21:37:18.466144    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0702 21:37:18.596840    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0702 21:37:18.626829    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0702 21:37:18.649966    8914 api_server.go:52] waiting for apiserver process to appear ...
	I0702 21:37:18.650040    8914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0702 21:37:19.152129    8914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0702 21:37:19.375872    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:19.652131    8914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0702 21:37:19.656347    8914 api_server.go:72] duration metric: took 1.006401458s to wait for apiserver process to appear ...
	I0702 21:37:19.656355    8914 api_server.go:88] waiting for apiserver healthz status ...
	I0702 21:37:19.656369    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
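The healthz wait above polls https://10.0.2.15:8443/healthz until the apiserver answers or the deadline passes; in this run it never does. A minimal Go sketch of such a poll; skipping TLS verification is an assumption for illustration, since the apiserver certificate is not in the local trust store:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns 200 or timeout elapses,
    // mirroring the api_server.go healthz loop in the log.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy at %s after %v", url, timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 10*time.Second))
    }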
	I0702 21:37:24.378145    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:24.378529    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:37:24.413415    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:37:24.413545    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:37:24.432063    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:37:24.432161    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:37:24.446478    8323 logs.go:276] 2 containers: [61261c440964 0033d4e81390]
	I0702 21:37:24.446554    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:37:24.459790    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:37:24.459862    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:37:24.470340    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:37:24.470405    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:37:24.481568    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:37:24.481633    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:37:24.493422    8323 logs.go:276] 0 containers: []
	W0702 21:37:24.493434    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:37:24.493494    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:37:24.504433    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:37:24.504449    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:37:24.504454    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:37:24.522486    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:37:24.522496    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:37:24.537986    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:37:24.537996    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:37:24.555381    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:37:24.555391    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:37:24.567127    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:37:24.567140    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:37:24.592631    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:37:24.592646    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:37:24.605566    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:37:24.605578    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:37:24.618747    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:37:24.618758    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:37:24.635543    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:24.635634    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:24.652126    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:37:24.652134    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:37:24.657513    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:37:24.657524    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:37:24.692165    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:37:24.692179    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:37:24.706857    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:37:24.706868    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:37:24.726279    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:37:24.726290    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
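
[Editor's sketch] Once the IDs are known, the gathering pass above dumps each source through /bin/bash -c: docker logs --tail 400 for every discovered container, plus journalctl for kubelet and Docker, dmesg, container status (with a crictl-or-docker fallback), and kubectl describe nodes. The command strings in this sketch are copied from the log; the surrounding Go, which runs them locally rather than over SSH, is illustrative only.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one diagnostic command via bash and prints its output.
	// Errors are deliberately ignored here: a dead component should not
	// stop the rest of the collection (sketch-level simplification).
	func gather(name, cmd string) {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Print(string(out))
	}

	func main() {
		gather("kube-apiserver [7a9072fd4040]", "docker logs --tail 400 7a9072fd4040")
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
		gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	}

Note the container-status command's fallback chain: it prefers crictl if present on the guest's PATH and otherwise falls back to docker ps -a.
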
	I0702 21:37:24.737948    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:24.737963    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:37:24.737988    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:37:24.737993    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:24.737996    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:24.738001    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:24.738004    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
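
[Editor's sketch] The logs.go:138 "Found kubelet problem" warnings and the out.go:239 "X Problems detected in kubelet:" block above come from scanning the kubelet journal for known-bad lines and replaying any matches as a summary at the end of each cycle. Here the repeated match is an RBAC node-authorizer denial: the node's kubelet may not list the coredns ConfigMap because the authorizer finds no relationship between the node and that object. The sketch below shows the scan-and-summarize shape only; the two substring patterns are a guess for illustration, not minikube's actual rule set.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// kubeletProblems scans journal output line by line and collects
	// lines matching (assumed) problem patterns.
	func kubeletProblems(journal string) []string {
		var problems []string
		sc := bufio.NewScanner(strings.NewReader(journal))
		for sc.Scan() {
			line := sc.Text()
			if strings.Contains(line, "is forbidden") || // assumed pattern
				strings.Contains(line, "Failed to watch") { // assumed pattern
				fmt.Println("Found kubelet problem:", line)
				problems = append(problems, line)
			}
		}
		return problems
	}

	func main() {
		journal := `Jul 03 04:31:44 ... configmaps "coredns" is forbidden: ...`
		if ps := kubeletProblems(journal); len(ps) > 0 {
			fmt.Println("X Problems detected in kubelet:")
			for _, p := range ps {
				fmt.Println(" ", p)
			}
		}
	}

Because the journal tail is re-read on every retry cycle, the same two problem lines recur verbatim in each summary block for the rest of this section.
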
	I0702 21:37:24.658316    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:24.658330    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:29.658520    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:29.658557    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:34.740838    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:34.658843    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:34.658862    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:39.742962    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:39.743119    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:37:39.766524    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:37:39.766606    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:37:39.779513    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:37:39.779585    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:37:39.793943    8323 logs.go:276] 3 containers: [ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:37:39.794017    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:37:39.806349    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:37:39.806429    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:37:39.816817    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:37:39.816889    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:37:39.827237    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:37:39.827303    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:37:39.837437    8323 logs.go:276] 0 containers: []
	W0702 21:37:39.837447    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:37:39.837505    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:37:39.858044    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:37:39.858063    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:37:39.858068    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:37:39.874762    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:39.874853    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:39.891082    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:37:39.891089    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:37:39.905473    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:37:39.905483    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:37:39.924001    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:37:39.924012    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:37:39.928527    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:37:39.928536    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:37:39.939666    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:37:39.939678    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:37:39.951995    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:37:39.952007    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:37:39.964432    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:37:39.964443    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:37:39.977009    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:37:39.977019    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:37:40.011777    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:37:40.011789    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:37:40.029240    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:37:40.029254    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:37:40.043328    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:37:40.043339    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:37:40.068741    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:37:40.068751    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:37:40.079807    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:37:40.079817    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:37:40.100944    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:40.100953    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:37:40.100980    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:37:40.100984    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:40.100988    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:40.100992    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:40.100994    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:37:39.659324    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:39.659388    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:44.660163    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:44.660240    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:50.103888    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:49.661271    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:49.661294    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:55.106011    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:55.106193    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:37:55.123904    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:37:55.123984    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:37:55.136954    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:37:55.137028    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:37:55.147952    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:37:55.148017    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:37:55.158663    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:37:55.158720    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:37:55.168713    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:37:55.168780    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:37:55.179672    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:37:55.179740    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:37:55.192779    8323 logs.go:276] 0 containers: []
	W0702 21:37:55.192789    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:37:55.192841    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:37:55.203723    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:37:55.203742    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:37:55.203747    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:37:55.220107    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:37:55.220118    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:37:55.239162    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:37:55.239174    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:37:55.274339    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:37:55.274349    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:37:55.286321    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:37:55.286332    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:37:55.309868    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:37:55.309876    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:37:55.326588    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:55.326685    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:55.343018    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:37:55.343024    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:37:55.348003    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:37:55.348011    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:37:55.362743    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:37:55.362755    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:37:55.380638    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:37:55.380648    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:37:55.391891    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:37:55.391904    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:37:55.413004    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:37:55.413015    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:37:55.428377    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:37:55.428392    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:37:55.439971    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:37:55.439981    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:37:55.458044    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:37:55.458054    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:37:55.469860    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:55.469873    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:37:55.469905    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:37:55.469910    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:37:55.469914    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:37:55.469920    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:37:55.469923    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:37:54.662393    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:54.662435    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:59.663990    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:59.664038    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:05.473581    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:04.665933    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:04.665975    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:10.475956    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:10.476328    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:10.507654    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:38:10.507786    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:10.525567    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:38:10.525665    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:10.538988    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:38:10.539057    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:10.550693    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:38:10.550762    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:10.561491    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:38:10.561561    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:10.572256    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:38:10.572326    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:10.582767    8323 logs.go:276] 0 containers: []
	W0702 21:38:10.582777    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:10.582832    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:10.593562    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:38:10.593578    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:38:10.593583    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:38:10.605418    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:38:10.605429    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:38:10.620656    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:38:10.620667    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:38:10.632287    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:38:10.632297    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:38:10.644063    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:10.644073    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:10.648825    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:10.648834    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:10.683425    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:38:10.683436    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:38:10.698617    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:38:10.698630    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:38:10.710925    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:10.710935    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:10.734539    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:10.734549    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:38:10.750571    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:10.750662    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:10.767153    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:38:10.767157    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:38:10.778044    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:38:10.778056    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:38:10.789421    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:38:10.789434    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:38:10.808579    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:38:10.808589    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:38:10.826310    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:38:10.826320    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:10.838149    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:10.838159    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:38:10.838184    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:38:10.838189    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:10.838193    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:10.838197    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:10.838200    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:38:09.668179    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:09.668201    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:14.669315    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:14.669356    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:20.842122    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:19.671506    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:19.671622    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:19.684014    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:38:19.684092    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:19.695053    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:38:19.695122    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:19.705756    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:38:19.705818    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:19.716694    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:38:19.716771    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:19.727515    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:38:19.727588    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:19.738100    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:38:19.738167    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:19.748345    8914 logs.go:276] 0 containers: []
	W0702 21:38:19.748354    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:19.748404    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:19.765888    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:38:19.765904    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:38:19.765910    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:38:19.790271    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:38:19.790283    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:38:19.806788    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:38:19.806799    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:38:19.824425    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:38:19.824434    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:38:19.836281    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:38:19.836293    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:38:19.857400    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:38:19.857412    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:38:19.871867    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:38:19.871880    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:38:19.886921    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:19.886934    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:19.891736    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:19.891742    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:19.992009    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:38:19.992020    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:38:20.023740    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:38:20.023767    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:38:20.038040    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:38:20.038051    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:38:20.049672    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:20.049685    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:20.074632    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:20.074641    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:38:20.111689    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:38:20.111699    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:38:20.123043    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:38:20.123054    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:38:20.133983    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:38:20.133992    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:22.648319    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:25.844511    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:25.844930    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:25.889646    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:38:25.889788    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:25.910644    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:38:25.910743    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:25.937119    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:38:25.937191    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:25.948488    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:38:25.948549    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:25.960079    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:38:25.960153    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:25.972023    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:38:25.972090    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:25.982714    8323 logs.go:276] 0 containers: []
	W0702 21:38:25.982725    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:25.982783    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:25.995148    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:38:25.995165    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:38:25.995171    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:38:26.033307    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:38:26.033318    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:38:26.045709    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:38:26.045723    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:26.058031    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:26.058042    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:38:26.074777    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:26.074871    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:26.091601    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:26.091612    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:26.096206    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:38:26.096213    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:38:26.110533    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:38:26.110544    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:38:26.122545    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:38:26.122554    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:38:26.138004    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:38:26.138015    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:38:26.155779    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:26.155790    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:26.189648    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:38:26.189661    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:38:26.206018    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:38:26.206027    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:38:26.219924    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:38:26.219937    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:38:26.232399    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:38:26.232412    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:38:26.244320    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:26.244329    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:26.268263    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:26.268274    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:38:26.268303    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:38:26.268309    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:26.268313    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:26.268317    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:26.268319    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:38:27.650631    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:27.650814    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:27.675313    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:38:27.675415    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:27.690365    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:38:27.690445    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:27.704451    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:38:27.704523    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:27.715037    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:38:27.715114    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:27.725280    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:38:27.725349    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:27.735827    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:38:27.735893    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:27.746103    8914 logs.go:276] 0 containers: []
	W0702 21:38:27.746112    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:27.746165    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:27.761305    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:38:27.761325    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:27.761330    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:27.786781    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:38:27.786790    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:27.798661    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:38:27.798676    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:38:27.813280    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:38:27.813290    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:38:27.824803    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:38:27.824814    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:38:27.841977    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:38:27.841987    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:38:27.856591    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:38:27.856601    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:38:27.868188    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:38:27.868200    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:38:27.888782    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:38:27.888792    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:38:27.900988    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:38:27.900999    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:38:27.912205    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:27.912216    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:38:27.949625    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:27.949636    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:27.953888    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:38:27.953894    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:38:27.968661    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:38:27.968672    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:38:27.979766    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:27.979978    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:28.016811    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:38:28.016823    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:38:28.042401    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:38:28.042415    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:38:30.558069    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:36.272203    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:35.560233    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:35.560510    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:35.575488    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:38:35.575576    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:35.587715    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:38:35.587791    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:35.602890    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:38:35.602961    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:35.613726    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:38:35.613794    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:35.624255    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:38:35.624319    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:35.634510    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:38:35.634581    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:35.644875    8914 logs.go:276] 0 containers: []
	W0702 21:38:35.644891    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:35.644950    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:35.655315    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:38:35.655335    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:38:35.655340    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:38:35.667353    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:38:35.667365    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:38:35.680814    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:38:35.680825    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:38:35.692051    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:38:35.692063    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:38:35.702686    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:38:35.702698    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:38:35.718049    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:38:35.718060    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:35.729770    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:38:35.729780    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:38:35.748011    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:38:35.748021    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:38:35.772343    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:38:35.772353    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:38:35.786399    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:38:35.786408    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:38:35.797908    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:38:35.797918    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:38:35.814956    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:38:35.814966    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:38:35.827147    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:35.827156    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:35.852495    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:35.852504    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:35.857158    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:38:35.857165    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:38:35.870868    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:35.870878    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:38:35.908777    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:35.908785    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:38.449040    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:41.274279    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:41.274394    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:41.286747    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:38:41.286821    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:41.297005    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:38:41.297075    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:41.307981    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:38:41.308053    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:41.318591    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:38:41.318668    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:41.328762    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:38:41.328824    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:41.348351    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:38:41.348409    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:41.359023    8323 logs.go:276] 0 containers: []
	W0702 21:38:41.359037    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:41.359099    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:41.369312    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:38:41.369331    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:41.369336    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:38:41.387328    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:41.387419    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:41.404157    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:38:41.404164    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:38:41.415409    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:38:41.415418    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:38:41.433699    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:41.433708    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:41.458666    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:38:41.458672    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:38:41.471414    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:38:41.471425    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:38:41.485117    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:41.485129    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:41.526170    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:38:41.526187    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:38:43.451240    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:43.451451    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:43.475235    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:38:43.475321    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:43.487900    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:38:43.487977    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:43.498360    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:38:43.498435    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:43.509114    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:38:43.509190    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:43.519738    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:38:43.519812    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:43.530489    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:38:43.530564    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:43.542104    8914 logs.go:276] 0 containers: []
	W0702 21:38:43.542116    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:43.542201    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:43.561708    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:38:43.561726    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:38:43.561731    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:38:43.587281    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:38:43.587293    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:38:43.598828    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:38:43.598838    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:38:43.614745    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:38:43.614757    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:38:43.627867    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:38:43.627877    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:38:43.641549    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:43.641560    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:43.665966    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:38:43.665974    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:43.677377    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:38:43.677388    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:38:43.691447    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:43.691456    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:43.730325    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:38:43.730335    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:38:43.744078    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:38:43.744088    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:38:43.758213    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:38:43.758224    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:38:43.769922    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:43.769932    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:43.773940    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:38:43.773945    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:38:43.789425    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:38:43.789437    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:38:43.807526    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:38:43.807536    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:38:43.823125    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:43.823135    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:38:41.542021    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:41.542031    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:41.547006    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:38:41.547015    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:38:41.565296    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:38:41.565306    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:38:41.579534    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:38:41.579546    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:38:41.591713    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:38:41.591722    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:38:41.603506    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:38:41.603517    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:38:41.615543    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:38:41.615553    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:41.627591    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:41.627601    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:38:41.627626    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:38:41.627632    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:41.627638    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:41.627643    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:41.627647    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:38:46.361850    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:51.364102    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:51.364232    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:51.377214    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:38:51.377288    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:51.388019    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:38:51.388086    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:51.398472    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:38:51.398534    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:51.409106    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:38:51.409183    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:51.419178    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:38:51.419243    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:51.435093    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:38:51.435168    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:51.445219    8914 logs.go:276] 0 containers: []
	W0702 21:38:51.445230    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:51.445285    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:51.455569    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:38:51.455588    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:51.455595    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:38:51.495596    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:51.495605    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:51.499862    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:51.499874    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:51.523974    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:38:51.523981    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:38:51.545306    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:38:51.545316    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:38:51.563380    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:38:51.563390    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:38:51.576259    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:38:51.576269    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:38:51.595788    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:38:51.595799    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:38:51.609781    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:38:51.609792    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:38:51.634848    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:38:51.634858    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:38:51.647192    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:38:51.647205    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:38:51.662224    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:38:51.662234    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:38:51.674474    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:38:51.674485    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:51.686798    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:51.686811    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:51.728507    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:38:51.728520    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:38:51.749077    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:38:51.749086    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:38:51.760693    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:38:51.760708    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:38:54.274038    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:51.630275    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:59.276242    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:59.276339    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:59.288738    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:38:59.288803    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:59.299649    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:38:59.299723    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:59.309617    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:38:59.309686    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:59.320008    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:38:59.320080    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:59.330455    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:38:59.330517    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:59.340983    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:38:59.341046    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:59.351052    8914 logs.go:276] 0 containers: []
	W0702 21:38:59.351065    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:59.351149    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:59.371649    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:38:59.371666    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:59.371671    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:59.375927    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:59.375934    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:59.410656    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:38:59.410667    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:38:59.424750    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:59.424765    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:59.449609    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:38:59.449617    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:38:59.467341    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:38:59.467351    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:38:59.483155    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:38:59.483165    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:38:59.498191    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:38:59.498204    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:38:59.511438    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:38:59.511450    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:38:59.522978    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:59.522988    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:38:59.559760    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:38:59.559768    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:38:59.573512    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:38:59.573524    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:38:59.584682    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:38:59.584698    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:38:56.632392    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:56.632588    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:56.651077    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:38:56.651162    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:56.664831    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:38:56.664910    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:56.676616    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:38:56.676682    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:56.686955    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:38:56.687026    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:56.697321    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:38:56.697387    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:56.707496    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:38:56.707569    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:56.717266    8323 logs.go:276] 0 containers: []
	W0702 21:38:56.717278    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:56.717333    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:56.728433    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:38:56.728452    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:38:56.728458    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:38:56.743991    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:56.744020    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:56.748387    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:56.748395    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:56.771431    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:38:56.771438    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:38:56.782577    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:56.782587    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:56.817489    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:38:56.817500    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:38:56.835696    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:38:56.835707    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:38:56.851500    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:38:56.851512    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:38:56.863450    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:38:56.863462    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:38:56.875662    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:38:56.875675    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:38:56.886916    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:56.886927    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:38:56.903748    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:56.903840    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:56.920608    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:38:56.920613    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:38:56.932289    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:38:56.932300    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:38:56.956129    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:38:56.956139    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:56.972783    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:38:56.972794    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:38:56.984826    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:56.984837    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:38:56.984866    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:38:56.984872    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:38:56.984876    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:38:56.984881    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:38:56.984883    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:38:59.608185    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:38:59.608195    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:38:59.621952    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:38:59.621963    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:38:59.633753    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:38:59.633763    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:38:59.650568    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:38:59.650578    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:02.165141    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:07.166956    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:07.167151    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:07.185450    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:39:07.185544    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:07.201920    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:39:07.202005    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:07.213305    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:39:07.213382    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:07.223429    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:39:07.223495    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:07.235523    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:39:07.235600    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:07.247475    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:39:07.247543    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:07.258142    8914 logs.go:276] 0 containers: []
	W0702 21:39:07.258156    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:07.258214    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:07.269011    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:39:07.269035    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:39:07.269040    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:39:07.280091    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:39:07.280102    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:39:07.295173    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:39:07.295184    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:39:07.308629    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:39:07.308643    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:39:07.320500    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:39:07.320512    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:39:07.331779    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:39:07.331790    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:07.343134    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:39:07.343146    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:39:07.356964    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:39:07.356973    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:39:07.368178    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:07.368191    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:07.372491    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:39:07.372498    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:39:07.386526    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:07.386536    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:07.429511    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:39:07.429522    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:39:07.455431    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:39:07.455443    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:39:07.472251    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:39:07.472265    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:39:07.483845    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:39:07.483855    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:39:07.500954    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:07.500968    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:07.526013    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:07.526021    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:39:06.986563    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:10.066703    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:11.989268    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:11.989437    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:12.007559    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:39:12.007652    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:12.027965    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:39:12.028036    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:12.039250    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:39:12.039321    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:12.049642    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:39:12.049705    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:12.061601    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:39:12.061665    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:12.076639    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:39:12.076701    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:12.091058    8323 logs.go:276] 0 containers: []
	W0702 21:39:12.091072    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:12.091127    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:12.104835    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:39:12.104851    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:39:12.104857    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:39:12.125517    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:39:12.125530    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:12.137620    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:39:12.137633    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:39:12.153027    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:12.153039    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:12.176572    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:12.176587    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:39:12.193033    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:39:12.193123    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:39:12.209504    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:39:12.209512    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:39:12.220785    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:39:12.220796    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:39:12.236400    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:39:12.236410    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:39:12.248357    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:39:12.248371    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:39:12.260386    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:39:12.260400    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:39:12.273781    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:12.273792    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:12.278917    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:12.278926    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:12.313548    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:39:12.313563    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:39:12.327323    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:39:12.327333    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:39:12.339086    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:39:12.339098    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:39:12.350956    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:39:12.350967    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:39:12.350995    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:39:12.351000    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:39:12.351003    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:39:12.351007    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:39:12.351010    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:39:15.068965    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:15.069210    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:15.092460    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:39:15.092562    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:15.108434    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:39:15.108514    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:15.122010    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:39:15.122085    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:15.132838    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:39:15.132911    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:15.143347    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:39:15.143412    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:15.154275    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:39:15.154349    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:15.165136    8914 logs.go:276] 0 containers: []
	W0702 21:39:15.165147    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:15.165201    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:15.179956    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:39:15.179977    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:39:15.179983    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:39:15.194123    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:39:15.194133    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:39:15.205910    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:39:15.205920    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:39:15.223277    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:39:15.223288    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:39:15.234531    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:39:15.234544    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:39:15.248409    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:39:15.248418    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:39:15.262020    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:15.262031    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:15.288749    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:39:15.288760    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:15.300806    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:39:15.300816    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:39:15.315510    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:15.315520    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:15.350261    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:39:15.350272    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:39:15.375829    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:39:15.375842    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:39:15.394369    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:15.394384    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:15.399626    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:39:15.399646    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:39:15.411906    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:39:15.411918    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:39:15.434491    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:39:15.434502    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:39:15.446306    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:15.446321    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:39:17.985896    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:22.988035    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:22.988217    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:23.003936    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:39:23.004015    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:23.016393    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:39:23.016464    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:23.026422    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:39:23.026494    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:23.037600    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:39:23.037665    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:23.047741    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:39:23.047806    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:23.059170    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:39:23.059234    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:23.069674    8914 logs.go:276] 0 containers: []
	W0702 21:39:23.069685    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:23.069738    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:23.080293    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:39:23.080313    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:23.080319    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:39:23.117102    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:23.117111    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:23.151362    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:39:23.151374    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:39:23.165441    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:39:23.165451    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:39:23.179566    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:23.179577    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:23.204853    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:39:23.204867    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:39:23.216901    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:23.216914    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:23.220977    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:39:23.220985    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:39:23.232482    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:39:23.232493    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:39:23.248586    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:39:23.248599    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:39:23.265903    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:39:23.265915    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:39:23.282469    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:39:23.282479    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:39:23.293995    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:39:23.294020    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:23.306240    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:39:23.306251    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:39:23.331509    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:39:23.331522    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:39:23.345823    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:39:23.345834    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:39:23.362259    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:39:23.362270    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:39:22.355003    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:25.880599    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:27.357644    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:27.358050    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:27.395521    8323 logs.go:276] 1 containers: [7a9072fd4040]
	I0702 21:39:27.395659    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:27.417625    8323 logs.go:276] 1 containers: [ad8bff9543c0]
	I0702 21:39:27.417735    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:27.432507    8323 logs.go:276] 4 containers: [00d0f2e17880 ca5987097fa1 61261c440964 0033d4e81390]
	I0702 21:39:27.432588    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:27.453851    8323 logs.go:276] 1 containers: [722b8e64335f]
	I0702 21:39:27.453929    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:27.472926    8323 logs.go:276] 1 containers: [a3a629c31cfb]
	I0702 21:39:27.473003    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:27.489411    8323 logs.go:276] 1 containers: [a8523ddcb6e3]
	I0702 21:39:27.489488    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:27.499712    8323 logs.go:276] 0 containers: []
	W0702 21:39:27.499725    8323 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:27.499785    8323 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:27.510443    8323 logs.go:276] 1 containers: [19f1810fd3bd]
	I0702 21:39:27.510462    8323 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:27.510467    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0702 21:39:27.527326    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:39:27.527418    8323 logs.go:138] Found kubelet problem: Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:39:27.543768    8323 logs.go:123] Gathering logs for coredns [00d0f2e17880] ...
	I0702 21:39:27.543775    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d0f2e17880"
	I0702 21:39:27.555920    8323 logs.go:123] Gathering logs for kube-controller-manager [a8523ddcb6e3] ...
	I0702 21:39:27.555934    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8523ddcb6e3"
	I0702 21:39:27.577906    8323 logs.go:123] Gathering logs for container status ...
	I0702 21:39:27.577917    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:27.589975    8323 logs.go:123] Gathering logs for etcd [ad8bff9543c0] ...
	I0702 21:39:27.589989    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad8bff9543c0"
	I0702 21:39:27.603589    8323 logs.go:123] Gathering logs for coredns [61261c440964] ...
	I0702 21:39:27.603602    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61261c440964"
	I0702 21:39:27.615187    8323 logs.go:123] Gathering logs for kube-proxy [a3a629c31cfb] ...
	I0702 21:39:27.615198    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a629c31cfb"
	I0702 21:39:27.628099    8323 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:27.628110    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:27.663094    8323 logs.go:123] Gathering logs for coredns [ca5987097fa1] ...
	I0702 21:39:27.663105    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca5987097fa1"
	I0702 21:39:27.675040    8323 logs.go:123] Gathering logs for coredns [0033d4e81390] ...
	I0702 21:39:27.675052    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0033d4e81390"
	I0702 21:39:27.686245    8323 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:27.686258    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:27.690648    8323 logs.go:123] Gathering logs for kube-apiserver [7a9072fd4040] ...
	I0702 21:39:27.690657    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9072fd4040"
	I0702 21:39:27.705469    8323 logs.go:123] Gathering logs for kube-scheduler [722b8e64335f] ...
	I0702 21:39:27.705479    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722b8e64335f"
	I0702 21:39:27.720941    8323 logs.go:123] Gathering logs for storage-provisioner [19f1810fd3bd] ...
	I0702 21:39:27.720950    8323 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19f1810fd3bd"
	I0702 21:39:27.734220    8323 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:27.734229    8323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:27.757062    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:39:27.757073    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0702 21:39:27.757103    8323 out.go:239] X Problems detected in kubelet:
	W0702 21:39:27.757107    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: W0703 04:31:44.146836    3896 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	W0702 21:39:27.757125    8323 out.go:239]   Jul 03 04:31:44 running-upgrade-908000 kubelet[3896]: E0703 04:31:44.146944    3896 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-908000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-908000' and this object
	I0702 21:39:27.757138    8323 out.go:304] Setting ErrFile to fd 2...
	I0702 21:39:27.757143    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
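
	Both test processes above are cycling through the same wait loop: probe the apiserver's /healthz and, on timeout, enumerate the control-plane containers and tail their logs. A minimal shell sketch of that probe, using the endpoint from the log; the 5s timeout and 3s retry cadence are illustrative, not minikube's exact values:

	    # Poll the apiserver health endpoint until it answers (illustrative
	    # retry loop; endpoint taken from the "Checking apiserver healthz" lines).
	    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
	      echo "apiserver healthz not ready; retrying"
	      sleep 3
	    done
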
	I0702 21:39:30.882965    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:30.883243    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:30.912143    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:39:30.912244    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:30.926727    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:39:30.926807    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:30.939242    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:39:30.939323    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:30.951622    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:39:30.951700    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:30.962067    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:39:30.962138    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:30.972589    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:39:30.972663    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:30.982784    8914 logs.go:276] 0 containers: []
	W0702 21:39:30.982794    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:30.982850    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:30.993484    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:39:30.993506    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:39:30.993511    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:39:31.007299    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:39:31.007309    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:39:31.019183    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:39:31.019192    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:39:31.030214    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:39:31.030224    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:39:31.045236    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:39:31.045245    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:39:31.063878    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:31.063888    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:39:31.102593    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:31.102604    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:31.107181    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:39:31.107187    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:39:31.120990    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:39:31.120999    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:39:31.132034    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:39:31.132046    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:39:31.149063    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:39:31.149072    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:39:31.166137    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:31.166147    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:31.200855    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:39:31.200865    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:39:31.225586    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:39:31.225597    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:39:31.240058    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:39:31.240068    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:39:31.252140    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:31.252150    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:31.277064    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:39:31.277074    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:33.792917    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:38.795201    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:38.795346    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:38.815092    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:39:38.815175    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:38.829485    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:39:38.829565    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:38.842557    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:39:38.842630    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:38.854295    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:39:38.854384    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:38.866283    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:39:38.866355    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:38.877909    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:39:38.878004    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:38.890570    8914 logs.go:276] 0 containers: []
	W0702 21:39:38.890581    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:38.890643    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:38.901771    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:39:38.901791    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:38.901798    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:39:38.939195    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:39:38.939205    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:39:38.964098    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:39:38.964108    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:39:38.981083    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:39:38.981094    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:39:38.997301    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:39:38.997312    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:39:39.009854    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:39:39.009864    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:39:39.027191    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:39:39.027201    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:39.039156    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:39.039168    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:39.043720    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:39:39.043729    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:39:39.061392    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:39:39.061401    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:39:39.074093    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:39:39.074105    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:39:39.085579    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:39.085589    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:39.120304    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:39:39.120316    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:39:39.135726    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:39:39.135743    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:39:39.153359    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:39:39.153373    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:39:39.168665    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:39:39.168682    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:39:39.179801    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:39.179811    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
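
	The enumeration step before each gathering pass filters containers by the k8s_<component> names cri-dockerd assigns. A sketch that reproduces it for every component queried above and tails each container's last 400 lines the same way the log does (the output filenames are hypothetical):

	    # Enumerate per-component containers and tail their logs, mirroring the
	    # "docker ps -a --filter" / "docker logs --tail 400" pairs above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	      for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
	        docker logs --tail 400 "$id" > "logs-${c}-${id}.txt" 2>&1
	      done
	    done
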
	I0702 21:39:37.760257    8323 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:42.762922    8323 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:42.768591    8323 out.go:177] 
	W0702 21:39:42.772567    8323 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0702 21:39:42.772585    8323 out.go:239] * 
	W0702 21:39:42.773778    8323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:39:42.783522    8323 out.go:177] 
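
	Process 8323 gives up here: the 6m0s node wait expired without /healthz ever answering, so the run exits with GUEST_START. The advice box's own command is the way to capture the full bundle for an issue report:

	    # Verbatim from the advice box above; writes the complete log bundle.
	    minikube logs --file=logs.txt
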
	I0702 21:39:41.707153    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:46.709338    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:46.709514    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:46.725767    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:39:46.725855    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:46.738504    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:39:46.738577    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:46.749808    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:39:46.749878    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:46.767561    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:39:46.767632    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:46.780992    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:39:46.781068    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:46.795382    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:39:46.795453    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:46.805236    8914 logs.go:276] 0 containers: []
	W0702 21:39:46.805247    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:46.805307    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:46.818199    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:39:46.818243    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:46.818250    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:46.822714    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:39:46.822720    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:39:46.846168    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:39:46.846179    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:39:46.860298    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:39:46.860309    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:39:46.872068    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:39:46.872081    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:46.885074    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:39:46.885087    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:39:46.901334    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:39:46.901344    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:39:46.913508    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:39:46.913518    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:39:46.931029    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:46.931039    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:39:46.971084    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:39:46.971095    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:39:46.985400    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:39:46.985410    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:39:46.997289    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:46.997300    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:47.020712    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:47.020720    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:47.059426    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:39:47.059438    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:39:47.074447    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:39:47.074459    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:39:47.086061    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:39:47.086070    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:39:47.099105    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:39:47.099116    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:39:49.613676    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-07-03 04:30:33 UTC, ends at Wed 2024-07-03 04:39:58 UTC. --
	Jul 03 04:39:38 running-upgrade-908000 dockerd[3210]: time="2024-07-03T04:39:38.948470301Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/601a787b48e9983a7bfad12665c62af5701907ba3a2d92290b052d6c7b8cd9d2 pid=15597 runtime=io.containerd.runc.v2
	Jul 03 04:39:39 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:39Z" level=error msg="ContainerStats resp: {0x400090f4c0 linux}"
	Jul 03 04:39:39 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:39Z" level=error msg="ContainerStats resp: {0x40004f4b40 linux}"
	Jul 03 04:39:40 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:40Z" level=error msg="ContainerStats resp: {0x400051d5c0 linux}"
	Jul 03 04:39:41 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:41Z" level=error msg="ContainerStats resp: {0x400051ddc0 linux}"
	Jul 03 04:39:41 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:41Z" level=error msg="ContainerStats resp: {0x40000b7dc0 linux}"
	Jul 03 04:39:41 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:41Z" level=error msg="ContainerStats resp: {0x400089ed40 linux}"
	Jul 03 04:39:41 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:41Z" level=error msg="ContainerStats resp: {0x4000988140 linux}"
	Jul 03 04:39:41 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:41Z" level=error msg="ContainerStats resp: {0x4000988640 linux}"
	Jul 03 04:39:41 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:41Z" level=error msg="ContainerStats resp: {0x4000988980 linux}"
	Jul 03 04:39:41 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:41Z" level=error msg="ContainerStats resp: {0x400089f740 linux}"
	Jul 03 04:39:42 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:42Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 03 04:39:47 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:47Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 03 04:39:51 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:51Z" level=error msg="ContainerStats resp: {0x40007eb6c0 linux}"
	Jul 03 04:39:51 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:51Z" level=error msg="ContainerStats resp: {0x400090e5c0 linux}"
	Jul 03 04:39:52 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:52Z" level=error msg="ContainerStats resp: {0x40004f5300 linux}"
	Jul 03 04:39:52 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:52Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 03 04:39:53 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:53Z" level=error msg="ContainerStats resp: {0x400089e340 linux}"
	Jul 03 04:39:53 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:53Z" level=error msg="ContainerStats resp: {0x4000356f80 linux}"
	Jul 03 04:39:53 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:53Z" level=error msg="ContainerStats resp: {0x4000357440 linux}"
	Jul 03 04:39:53 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:53Z" level=error msg="ContainerStats resp: {0x4000357880 linux}"
	Jul 03 04:39:53 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:53Z" level=error msg="ContainerStats resp: {0x4000357d40 linux}"
	Jul 03 04:39:53 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:53Z" level=error msg="ContainerStats resp: {0x400094e140 linux}"
	Jul 03 04:39:53 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:53Z" level=error msg="ContainerStats resp: {0x400089fac0 linux}"
	Jul 03 04:39:57 running-upgrade-908000 cri-dockerd[3053]: time="2024-07-03T04:39:57Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	601a787b48e99       edaa71f2aee88       20 seconds ago      Running             coredns                   2                   59a55e43ebbb9
	df811312e1e48       edaa71f2aee88       29 seconds ago      Running             coredns                   2                   c242b9479301a
	00d0f2e17880b       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   59a55e43ebbb9
	ca5987097fa1f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   c242b9479301a
	19f1810fd3bdf       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   ea066b1f32287
	a3a629c31cfb8       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   92691a1820ca5
	722b8e64335fc       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   f785c36f59140
	a8523ddcb6e3e       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   772c3330632bb
	ad8bff9543c00       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   2d379d345585e
	7a9072fd4040f       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   631845d3573d7
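
	The ATTEMPT column shows both coredns containers on their second restart while the rest of the control plane is still on attempt 0. Inside the guest the same table can be refreshed directly; crictl is the first choice in the fallback command the log itself runs:

	    # Same listing the "container status" gatherer produces.
	    sudo crictl ps -a || sudo docker ps -a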
	
	
	==> coredns [00d0f2e17880] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3036902819056584430.2931117665128672281. HINFO: read udp 10.244.0.3:36625->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3036902819056584430.2931117665128672281. HINFO: read udp 10.244.0.3:49934->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3036902819056584430.2931117665128672281. HINFO: read udp 10.244.0.3:46313->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3036902819056584430.2931117665128672281. HINFO: read udp 10.244.0.3:40673->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3036902819056584430.2931117665128672281. HINFO: read udp 10.244.0.3:60152->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3036902819056584430.2931117665128672281. HINFO: read udp 10.244.0.3:46292->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3036902819056584430.2931117665128672281. HINFO: read udp 10.244.0.3:49686->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3036902819056584430.2931117665128672281. HINFO: read udp 10.244.0.3:48180->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3036902819056584430.2931117665128672281. HINFO: read udp 10.244.0.3:51580->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3036902819056584430.2931117665128672281. HINFO: read udp 10.244.0.3:37637->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [601a787b48e9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5195281721569818026.7614100234801715435. HINFO: read udp 10.244.0.3:41020->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5195281721569818026.7614100234801715435. HINFO: read udp 10.244.0.3:37081->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5195281721569818026.7614100234801715435. HINFO: read udp 10.244.0.3:46069->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5195281721569818026.7614100234801715435. HINFO: read udp 10.244.0.3:42184->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5195281721569818026.7614100234801715435. HINFO: read udp 10.244.0.3:49688->10.0.2.3:53: i/o timeout
	
	
	==> coredns [ca5987097fa1] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7480158815317834816.4768187198314877889. HINFO: read udp 10.244.0.2:43695->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7480158815317834816.4768187198314877889. HINFO: read udp 10.244.0.2:49440->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7480158815317834816.4768187198314877889. HINFO: read udp 10.244.0.2:40944->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7480158815317834816.4768187198314877889. HINFO: read udp 10.244.0.2:33894->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7480158815317834816.4768187198314877889. HINFO: read udp 10.244.0.2:60215->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7480158815317834816.4768187198314877889. HINFO: read udp 10.244.0.2:60611->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7480158815317834816.4768187198314877889. HINFO: read udp 10.244.0.2:47682->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7480158815317834816.4768187198314877889. HINFO: read udp 10.244.0.2:52251->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7480158815317834816.4768187198314877889. HINFO: read udp 10.244.0.2:51257->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7480158815317834816.4768187198314877889. HINFO: read udp 10.244.0.2:47383->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [df811312e1e4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5401208067584017795.8745754348416854944. HINFO: read udp 10.244.0.2:34157->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5401208067584017795.8745754348416854944. HINFO: read udp 10.244.0.2:45936->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5401208067584017795.8745754348416854944. HINFO: read udp 10.244.0.2:43287->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5401208067584017795.8745754348416854944. HINFO: read udp 10.244.0.2:45262->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5401208067584017795.8745754348416854944. HINFO: read udp 10.244.0.2:43571->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5401208067584017795.8745754348416854944. HINFO: read udp 10.244.0.2:54139->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5401208067584017795.8745754348416854944. HINFO: read udp 10.244.0.2:47897->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5401208067584017795.8745754348416854944. HINFO: read udp 10.244.0.2:57983->10.0.2.3:53: i/o timeout
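
	All four coredns instances fail their HINFO self-test the same way: reads to 10.0.2.3:53 time out. 10.0.2.3 is the DNS address QEMU's user-mode (SLIRP) network hands out, so upstream DNS from pods is likely unreachable in this run. A sketch for probing that forwarder from a pod; the pod name and busybox tag are hypothetical:

	    # Ask the SLIRP DNS forwarder directly from inside the cluster network.
	    kubectl run dns-probe --image=busybox:1.36 --restart=Never -- \
	      nslookup k8s.io 10.0.2.3
	    kubectl logs dns-probe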
	
	
	==> describe nodes <==
	Name:               running-upgrade-908000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-908000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e34d4fd348f73f0f8af294cc2737aeb8da39e8d
	                    minikube.k8s.io/name=running-upgrade-908000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_02T21_35_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 04:35:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-908000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 04:39:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 04:35:38 +0000   Wed, 03 Jul 2024 04:35:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 04:35:38 +0000   Wed, 03 Jul 2024 04:35:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 04:35:38 +0000   Wed, 03 Jul 2024 04:35:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 04:35:38 +0000   Wed, 03 Jul 2024 04:35:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-908000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a847395e4164754bd517d76c15fd31a
	  System UUID:                4a847395e4164754bd517d76c15fd31a
	  Boot ID:                    9204e469-a61d-4cbd-a519-176da1b26689
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dgfvs                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 coredns-6d4b75cb6d-rc8vb                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 etcd-running-upgrade-908000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-running-upgrade-908000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-running-upgrade-908000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-xrpz6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-running-upgrade-908000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m7s   kube-proxy       
	  Normal  Starting                 4m22s  kubelet          Starting kubelet.
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-908000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-908000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-908000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-908000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m9s   node-controller  Node running-upgrade-908000 event: Registered Node running-upgrade-908000 in Controller
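
	Note the tension with the harness's view: the node is Ready, its lease RenewTime is current (04:39:53), and every condition is healthy, yet the healthz probe from the test host keeps timing out. That pattern points at host-to-guest reachability of 10.0.2.15:8443 rather than an in-guest failure. A cross-check sketch, probing from inside the guest instead (assumes SSH to the VM still works; -p names the profile from the log):

	    # Probe the apiserver from inside the VM instead of from the host.
	    minikube ssh -p running-upgrade-908000 -- curl -sk https://localhost:8443/healthz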
	
	
	==> dmesg <==
	[  +1.920953] systemd-fstab-generator[879]: Ignoring "noauto" for root device
	[  +0.084999] systemd-fstab-generator[890]: Ignoring "noauto" for root device
	[  +0.080761] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +1.136249] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.092016] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.079081] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[  +3.108198] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[  +8.641778] systemd-fstab-generator[1923]: Ignoring "noauto" for root device
	[Jul 3 04:31] systemd-fstab-generator[2202]: Ignoring "noauto" for root device
	[  +0.143893] systemd-fstab-generator[2234]: Ignoring "noauto" for root device
	[  +0.093322] systemd-fstab-generator[2245]: Ignoring "noauto" for root device
	[  +0.084576] systemd-fstab-generator[2258]: Ignoring "noauto" for root device
	[ +13.418786] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.214940] systemd-fstab-generator[3007]: Ignoring "noauto" for root device
	[  +0.073905] systemd-fstab-generator[3021]: Ignoring "noauto" for root device
	[  +0.084305] systemd-fstab-generator[3032]: Ignoring "noauto" for root device
	[  +0.091844] systemd-fstab-generator[3046]: Ignoring "noauto" for root device
	[  +2.405813] systemd-fstab-generator[3197]: Ignoring "noauto" for root device
	[  +3.469792] systemd-fstab-generator[3591]: Ignoring "noauto" for root device
	[  +1.429647] systemd-fstab-generator[3890]: Ignoring "noauto" for root device
	[ +20.643746] kauditd_printk_skb: 68 callbacks suppressed
	[Jul 3 04:35] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.447242] systemd-fstab-generator[10090]: Ignoring "noauto" for root device
	[  +5.675804] systemd-fstab-generator[10700]: Ignoring "noauto" for root device
	[  +0.457178] systemd-fstab-generator[10832]: Ignoring "noauto" for root device
	
	
	==> etcd [ad8bff9543c0] <==
	{"level":"info","ts":"2024-07-03T04:35:33.389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-03T04:35:33.389Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-03T04:35:33.405Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-03T04:35:33.405Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-03T04:35:33.405Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-03T04:35:33.405Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-03T04:35:33.405Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-03T04:35:33.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-03T04:35:33.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-03T04:35:33.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-03T04:35:33.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-03T04:35:33.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-03T04:35:33.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-03T04:35:33.685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-03T04:35:33.685Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-03T04:35:33.688Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-03T04:35:33.689Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-03T04:35:33.689Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-03T04:35:33.689Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-908000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-03T04:35:33.689Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T04:35:33.689Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-03T04:35:33.689Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T04:35:33.690Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-03T04:35:33.695Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-03T04:35:33.695Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 04:39:59 up 9 min,  0 users,  load average: 0.08, 0.30, 0.20
	Linux running-upgrade-908000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [7a9072fd4040] <==
	I0703 04:35:35.081972       1 controller.go:611] quota admission added evaluator for: namespaces
	I0703 04:35:35.108368       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0703 04:35:35.117061       1 cache.go:39] Caches are synced for autoregister controller
	I0703 04:35:35.117064       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0703 04:35:35.117188       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0703 04:35:35.117213       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0703 04:35:35.118803       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0703 04:35:35.843146       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0703 04:35:36.022645       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0703 04:35:36.027586       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0703 04:35:36.027612       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0703 04:35:36.164960       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0703 04:35:36.174206       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0703 04:35:36.287926       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0703 04:35:36.290366       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0703 04:35:36.290767       1 controller.go:611] quota admission added evaluator for: endpoints
	I0703 04:35:36.292209       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0703 04:35:37.148374       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0703 04:35:37.916413       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0703 04:35:37.920291       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0703 04:35:37.926632       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0703 04:35:37.986222       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0703 04:35:50.401572       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0703 04:35:50.902062       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0703 04:35:51.521933       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [a8523ddcb6e3] <==
	I0703 04:35:50.398556       1 event.go:294] "Event occurred" object="running-upgrade-908000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-908000 event: Registered Node running-upgrade-908000 in Controller"
	I0703 04:35:50.398592       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0703 04:35:50.400143       1 shared_informer.go:262] Caches are synced for TTL
	I0703 04:35:50.400284       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0703 04:35:50.405996       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xrpz6"
	I0703 04:35:50.412789       1 shared_informer.go:262] Caches are synced for node
	I0703 04:35:50.412887       1 range_allocator.go:173] Starting range CIDR allocator
	I0703 04:35:50.412906       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0703 04:35:50.412924       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0703 04:35:50.417661       1 range_allocator.go:374] Set node running-upgrade-908000 PodCIDR to [10.244.0.0/24]
	I0703 04:35:50.418720       1 shared_informer.go:262] Caches are synced for expand
	I0703 04:35:50.498833       1 shared_informer.go:262] Caches are synced for stateful set
	I0703 04:35:50.498914       1 shared_informer.go:262] Caches are synced for deployment
	I0703 04:35:50.537889       1 shared_informer.go:262] Caches are synced for resource quota
	I0703 04:35:50.558585       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0703 04:35:50.627073       1 shared_informer.go:262] Caches are synced for resource quota
	I0703 04:35:50.631235       1 shared_informer.go:262] Caches are synced for disruption
	I0703 04:35:50.631261       1 disruption.go:371] Sending events to api server.
	I0703 04:35:50.651275       1 shared_informer.go:262] Caches are synced for attach detach
	I0703 04:35:50.903354       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0703 04:35:51.003558       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-rc8vb"
	I0703 04:35:51.012617       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dgfvs"
	I0703 04:35:51.050803       1 shared_informer.go:262] Caches are synced for garbage collector
	I0703 04:35:51.057024       1 shared_informer.go:262] Caches are synced for garbage collector
	I0703 04:35:51.057058       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [a3a629c31cfb] <==
	I0703 04:35:51.501701       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0703 04:35:51.501730       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0703 04:35:51.501742       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0703 04:35:51.520137       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0703 04:35:51.520147       1 server_others.go:206] "Using iptables Proxier"
	I0703 04:35:51.520160       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0703 04:35:51.520251       1 server.go:661] "Version info" version="v1.24.1"
	I0703 04:35:51.520254       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 04:35:51.520703       1 config.go:317] "Starting service config controller"
	I0703 04:35:51.520723       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0703 04:35:51.520731       1 config.go:226] "Starting endpoint slice config controller"
	I0703 04:35:51.520733       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0703 04:35:51.520934       1 config.go:444] "Starting node config controller"
	I0703 04:35:51.520936       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0703 04:35:51.620761       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0703 04:35:51.620774       1 shared_informer.go:262] Caches are synced for service config
	I0703 04:35:51.621037       1 shared_informer.go:262] Caches are synced for node config
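
	kube-proxy detected no explicit mode, fell back to iptables, and synced all three config controllers, so service routing was programmed. A quick sketch for inspecting the resulting NAT rules inside the guest:

	    # List the service NAT chains kube-proxy programs in iptables mode.
	    sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20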
	
	
	==> kube-scheduler [722b8e64335f] <==
	W0703 04:35:35.080490       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0703 04:35:35.080521       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0703 04:35:35.080698       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0703 04:35:35.080743       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0703 04:35:35.082102       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0703 04:35:35.082159       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0703 04:35:35.933671       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0703 04:35:35.933757       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0703 04:35:35.939751       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0703 04:35:35.939963       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0703 04:35:36.061493       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0703 04:35:36.061663       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0703 04:35:36.072287       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0703 04:35:36.072370       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0703 04:35:36.077828       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0703 04:35:36.077862       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0703 04:35:36.086329       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0703 04:35:36.086396       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0703 04:35:36.119271       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0703 04:35:36.119371       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0703 04:35:36.124819       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0703 04:35:36.124866       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0703 04:35:36.156580       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0703 04:35:36.156674       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0703 04:35:39.077539       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-07-03 04:30:33 UTC, ends at Wed 2024-07-03 04:39:59 UTC. --
	Jul 03 04:35:40 running-upgrade-908000 kubelet[10706]: E0703 04:35:40.148748   10706 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-908000\" already exists" pod="kube-system/etcd-running-upgrade-908000"
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: I0703 04:35:50.407531   10706 topology_manager.go:200] "Topology Admit Handler"
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: I0703 04:35:50.410103   10706 topology_manager.go:200] "Topology Admit Handler"
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: I0703 04:35:50.477751   10706 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: I0703 04:35:50.478277   10706 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: I0703 04:35:50.578378   10706 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz6jd\" (UniqueName: \"kubernetes.io/projected/a121259c-24ef-4d97-85b8-0aa557e035a2-kube-api-access-mz6jd\") pod \"storage-provisioner\" (UID: \"a121259c-24ef-4d97-85b8-0aa557e035a2\") " pod="kube-system/storage-provisioner"
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: I0703 04:35:50.578408   10706 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7gfz\" (UniqueName: \"kubernetes.io/projected/b043e242-4436-484d-a5ec-fb5d3db4f435-kube-api-access-t7gfz\") pod \"kube-proxy-xrpz6\" (UID: \"b043e242-4436-484d-a5ec-fb5d3db4f435\") " pod="kube-system/kube-proxy-xrpz6"
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: I0703 04:35:50.578437   10706 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a121259c-24ef-4d97-85b8-0aa557e035a2-tmp\") pod \"storage-provisioner\" (UID: \"a121259c-24ef-4d97-85b8-0aa557e035a2\") " pod="kube-system/storage-provisioner"
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: I0703 04:35:50.578502   10706 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b043e242-4436-484d-a5ec-fb5d3db4f435-xtables-lock\") pod \"kube-proxy-xrpz6\" (UID: \"b043e242-4436-484d-a5ec-fb5d3db4f435\") " pod="kube-system/kube-proxy-xrpz6"
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: I0703 04:35:50.578517   10706 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b043e242-4436-484d-a5ec-fb5d3db4f435-kube-proxy\") pod \"kube-proxy-xrpz6\" (UID: \"b043e242-4436-484d-a5ec-fb5d3db4f435\") " pod="kube-system/kube-proxy-xrpz6"
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: I0703 04:35:50.578531   10706 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b043e242-4436-484d-a5ec-fb5d3db4f435-lib-modules\") pod \"kube-proxy-xrpz6\" (UID: \"b043e242-4436-484d-a5ec-fb5d3db4f435\") " pod="kube-system/kube-proxy-xrpz6"
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: E0703 04:35:50.683092   10706 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: E0703 04:35:50.683119   10706 projected.go:192] Error preparing data for projected volume kube-api-access-t7gfz for pod kube-system/kube-proxy-xrpz6: configmap "kube-root-ca.crt" not found
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: E0703 04:35:50.683185   10706 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b043e242-4436-484d-a5ec-fb5d3db4f435-kube-api-access-t7gfz podName:b043e242-4436-484d-a5ec-fb5d3db4f435 nodeName:}" failed. No retries permitted until 2024-07-03 04:35:51.183164808 +0000 UTC m=+13.278059530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t7gfz" (UniqueName: "kubernetes.io/projected/b043e242-4436-484d-a5ec-fb5d3db4f435-kube-api-access-t7gfz") pod "kube-proxy-xrpz6" (UID: "b043e242-4436-484d-a5ec-fb5d3db4f435") : configmap "kube-root-ca.crt" not found
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: E0703 04:35:50.689976   10706 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: E0703 04:35:50.689993   10706 projected.go:192] Error preparing data for projected volume kube-api-access-mz6jd for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 03 04:35:50 running-upgrade-908000 kubelet[10706]: E0703 04:35:50.690021   10706 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/a121259c-24ef-4d97-85b8-0aa557e035a2-kube-api-access-mz6jd podName:a121259c-24ef-4d97-85b8-0aa557e035a2 nodeName:}" failed. No retries permitted until 2024-07-03 04:35:51.19001113 +0000 UTC m=+13.284905851 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mz6jd" (UniqueName: "kubernetes.io/projected/a121259c-24ef-4d97-85b8-0aa557e035a2-kube-api-access-mz6jd") pod "storage-provisioner" (UID: "a121259c-24ef-4d97-85b8-0aa557e035a2") : configmap "kube-root-ca.crt" not found
	Jul 03 04:35:51 running-upgrade-908000 kubelet[10706]: I0703 04:35:51.005976   10706 topology_manager.go:200] "Topology Admit Handler"
	Jul 03 04:35:51 running-upgrade-908000 kubelet[10706]: I0703 04:35:51.019492   10706 topology_manager.go:200] "Topology Admit Handler"
	Jul 03 04:35:51 running-upgrade-908000 kubelet[10706]: I0703 04:35:51.187161   10706 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/783ecfcc-af5a-418c-95e9-8b08364c5f68-config-volume\") pod \"coredns-6d4b75cb6d-rc8vb\" (UID: \"783ecfcc-af5a-418c-95e9-8b08364c5f68\") " pod="kube-system/coredns-6d4b75cb6d-rc8vb"
	Jul 03 04:35:51 running-upgrade-908000 kubelet[10706]: I0703 04:35:51.187195   10706 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8wmq\" (UniqueName: \"kubernetes.io/projected/783ecfcc-af5a-418c-95e9-8b08364c5f68-kube-api-access-q8wmq\") pod \"coredns-6d4b75cb6d-rc8vb\" (UID: \"783ecfcc-af5a-418c-95e9-8b08364c5f68\") " pod="kube-system/coredns-6d4b75cb6d-rc8vb"
	Jul 03 04:35:51 running-upgrade-908000 kubelet[10706]: I0703 04:35:51.187232   10706 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cb2e642-cb15-4598-ae34-b54d0720288e-config-volume\") pod \"coredns-6d4b75cb6d-dgfvs\" (UID: \"9cb2e642-cb15-4598-ae34-b54d0720288e\") " pod="kube-system/coredns-6d4b75cb6d-dgfvs"
	Jul 03 04:35:51 running-upgrade-908000 kubelet[10706]: I0703 04:35:51.187249   10706 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9khq\" (UniqueName: \"kubernetes.io/projected/9cb2e642-cb15-4598-ae34-b54d0720288e-kube-api-access-g9khq\") pod \"coredns-6d4b75cb6d-dgfvs\" (UID: \"9cb2e642-cb15-4598-ae34-b54d0720288e\") " pod="kube-system/coredns-6d4b75cb6d-dgfvs"
	Jul 03 04:39:29 running-upgrade-908000 kubelet[10706]: I0703 04:39:29.477151   10706 scope.go:110] "RemoveContainer" containerID="0033d4e81390f12ebf13cbc2e655f25af0c38ba5548f0196140c00cd62fddef5"
	Jul 03 04:39:39 running-upgrade-908000 kubelet[10706]: I0703 04:39:39.576675   10706 scope.go:110] "RemoveContainer" containerID="61261c440964278fff6a5c3ec1d42733108e62949ca74c5c7fc754800b2b380f"
	
	
	==> storage-provisioner [19f1810fd3bd] <==
	I0703 04:35:51.530119       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0703 04:35:51.533421       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0703 04:35:51.533438       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0703 04:35:51.536812       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0703 04:35:51.536986       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"19ec6913-b01d-456c-885b-dba636ce6e34", APIVersion:"v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-908000_c57ce652-e0a4-4964-8520-fe106df082f7 became leader
	I0703 04:35:51.537000       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-908000_c57ce652-e0a4-4964-8520-fe106df082f7!
	I0703 04:35:51.637549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-908000_c57ce652-e0a4-4964-8520-fe106df082f7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-908000 -n running-upgrade-908000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-908000 -n running-upgrade-908000: exit status 2 (15.717271375s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-908000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-908000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-908000
--- FAIL: TestRunningBinaryUpgrade (606.40s)
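
Triage note: the post-mortem logs above contain only transient bootstrap noise, not the failure itself. The kube-scheduler "forbidden" warnings occur before RBAC bootstrapping finishes and stop once the informer caches sync at 04:35:39, and the kubelet's configmap "kube-root-ca.crt" not found errors are retried successfully 500ms later. The test fails because the apiserver reports "Stopped" after the binary upgrade. On a healthy cluster, the scheduler's RBAC grants can be spot-checked via impersonation (a diagnostic sketch, not part of the test suite):

	# check a sample of the list permissions the scheduler was denied above
	kubectl auth can-i list pods --as=system:kube-scheduler --all-namespaces
	kubectl auth can-i list namespaces --as=system:kube-scheduler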

TestKubernetesUpgrade (17.08s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-521000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-521000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.752482792s)

-- stdout --
	* [kubernetes-upgrade-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-521000" primary control-plane node in "kubernetes-upgrade-521000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-521000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:35:40.219529    8823 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:35:40.219678    8823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:35:40.219682    8823 out.go:304] Setting ErrFile to fd 2...
	I0702 21:35:40.219684    8823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:35:40.219817    8823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:35:40.220928    8823 out.go:298] Setting JSON to false
	I0702 21:35:40.237106    8823 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5709,"bootTime":1719975631,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:35:40.237178    8823 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:35:40.241629    8823 out.go:177] * [kubernetes-upgrade-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:35:40.248742    8823 notify.go:220] Checking for updates...
	I0702 21:35:40.252542    8823 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:35:40.255708    8823 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:35:40.260631    8823 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:35:40.264725    8823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:35:40.267599    8823 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:35:40.270684    8823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:35:40.273985    8823 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:35:40.274047    8823 config.go:182] Loaded profile config "running-upgrade-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:35:40.274106    8823 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:35:40.277599    8823 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:35:40.284626    8823 start.go:297] selected driver: qemu2
	I0702 21:35:40.284632    8823 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:35:40.284638    8823 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:35:40.286939    8823 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:35:40.288353    8823 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:35:40.291725    8823 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0702 21:35:40.291736    8823 cni.go:84] Creating CNI manager for ""
	I0702 21:35:40.291742    8823 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0702 21:35:40.291771    8823 start.go:340] cluster config:
	{Name:kubernetes-upgrade-521000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:35:40.295032    8823 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:35:40.302665    8823 out.go:177] * Starting "kubernetes-upgrade-521000" primary control-plane node in "kubernetes-upgrade-521000" cluster
	I0702 21:35:40.306585    8823 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0702 21:35:40.306599    8823 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0702 21:35:40.306609    8823 cache.go:56] Caching tarball of preloaded images
	I0702 21:35:40.306665    8823 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:35:40.306671    8823 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0702 21:35:40.306718    8823 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/kubernetes-upgrade-521000/config.json ...
	I0702 21:35:40.306728    8823 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/kubernetes-upgrade-521000/config.json: {Name:mkc9624c28f17afd8a834b956ca2b57da1ba0ec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:35:40.307036    8823 start.go:360] acquireMachinesLock for kubernetes-upgrade-521000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:35:40.307076    8823 start.go:364] duration metric: took 32µs to acquireMachinesLock for "kubernetes-upgrade-521000"
	I0702 21:35:40.307089    8823 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:35:40.307118    8823 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:35:40.315703    8823 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:35:40.330296    8823 start.go:159] libmachine.API.Create for "kubernetes-upgrade-521000" (driver="qemu2")
	I0702 21:35:40.330323    8823 client.go:168] LocalClient.Create starting
	I0702 21:35:40.330381    8823 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:35:40.330415    8823 main.go:141] libmachine: Decoding PEM data...
	I0702 21:35:40.330425    8823 main.go:141] libmachine: Parsing certificate...
	I0702 21:35:40.330465    8823 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:35:40.330488    8823 main.go:141] libmachine: Decoding PEM data...
	I0702 21:35:40.330493    8823 main.go:141] libmachine: Parsing certificate...
	I0702 21:35:40.330850    8823 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:35:40.457709    8823 main.go:141] libmachine: Creating SSH key...
	I0702 21:35:40.564658    8823 main.go:141] libmachine: Creating Disk image...
	I0702 21:35:40.564665    8823 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:35:40.564849    8823 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2
	I0702 21:35:40.574700    8823 main.go:141] libmachine: STDOUT: 
	I0702 21:35:40.574721    8823 main.go:141] libmachine: STDERR: 
	I0702 21:35:40.574776    8823 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2 +20000M
	I0702 21:35:40.583099    8823 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:35:40.583115    8823 main.go:141] libmachine: STDERR: 
	I0702 21:35:40.583155    8823 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2
	I0702 21:35:40.583162    8823 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:35:40.583196    8823 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:cd:b3:df:d7:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2
	I0702 21:35:40.584879    8823 main.go:141] libmachine: STDOUT: 
	I0702 21:35:40.584899    8823 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:35:40.584919    8823 client.go:171] duration metric: took 254.592542ms to LocalClient.Create
	I0702 21:35:42.587112    8823 start.go:128] duration metric: took 2.279970083s to createHost
	I0702 21:35:42.587184    8823 start.go:83] releasing machines lock for "kubernetes-upgrade-521000", held for 2.280107417s
	W0702 21:35:42.587317    8823 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:35:42.599010    8823 out.go:177] * Deleting "kubernetes-upgrade-521000" in qemu2 ...
	W0702 21:35:42.618536    8823 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:35:42.618565    8823 start.go:728] Will try again in 5 seconds ...
	I0702 21:35:47.620749    8823 start.go:360] acquireMachinesLock for kubernetes-upgrade-521000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:35:47.621218    8823 start.go:364] duration metric: took 375.917µs to acquireMachinesLock for "kubernetes-upgrade-521000"
	I0702 21:35:47.621339    8823 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:35:47.621594    8823 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:35:47.631153    8823 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:35:47.680024    8823 start.go:159] libmachine.API.Create for "kubernetes-upgrade-521000" (driver="qemu2")
	I0702 21:35:47.680076    8823 client.go:168] LocalClient.Create starting
	I0702 21:35:47.680201    8823 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:35:47.680282    8823 main.go:141] libmachine: Decoding PEM data...
	I0702 21:35:47.680300    8823 main.go:141] libmachine: Parsing certificate...
	I0702 21:35:47.680362    8823 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:35:47.680407    8823 main.go:141] libmachine: Decoding PEM data...
	I0702 21:35:47.680420    8823 main.go:141] libmachine: Parsing certificate...
	I0702 21:35:47.680955    8823 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:35:47.817297    8823 main.go:141] libmachine: Creating SSH key...
	I0702 21:35:47.881480    8823 main.go:141] libmachine: Creating Disk image...
	I0702 21:35:47.881489    8823 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:35:47.881662    8823 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2
	I0702 21:35:47.891023    8823 main.go:141] libmachine: STDOUT: 
	I0702 21:35:47.891039    8823 main.go:141] libmachine: STDERR: 
	I0702 21:35:47.891080    8823 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2 +20000M
	I0702 21:35:47.899040    8823 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:35:47.899055    8823 main.go:141] libmachine: STDERR: 
	I0702 21:35:47.899066    8823 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2
	I0702 21:35:47.899076    8823 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:35:47.899105    8823 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:00:50:bd:0a:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2
	I0702 21:35:47.900888    8823 main.go:141] libmachine: STDOUT: 
	I0702 21:35:47.900902    8823 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:35:47.900915    8823 client.go:171] duration metric: took 220.835875ms to LocalClient.Create
	I0702 21:35:49.903076    8823 start.go:128] duration metric: took 2.281470958s to createHost
	I0702 21:35:49.903155    8823 start.go:83] releasing machines lock for "kubernetes-upgrade-521000", held for 2.281939167s
	W0702 21:35:49.903494    8823 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:35:49.914104    8823 out.go:177] 
	W0702 21:35:49.918280    8823 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:35:49.918329    8823 out.go:239] * 
	* 
	W0702 21:35:49.920736    8823 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:35:49.929920    8823 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-521000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
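
Triage note: both VM creation attempts die at the same step: libmachine launches qemu-system-aarch64 through the socket_vmnet_client wrapper, which must connect to the vmnet daemon's unix socket before handing QEMU the network file descriptor (-netdev socket,id=net0,fd=3), and that connect is refused. A minimal host-side probe (a sketch; assumes socket_vmnet runs as a Homebrew-managed service, with the paths shown in the logs above):

	# is the daemon socket present, and does it accept a connection?
	ls -l /var/run/socket_vmnet
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# if the connection is refused, restart the daemon (assumption: brew-managed)
	sudo brew services restart socket_vmnet
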
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-521000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-521000: (1.950540209s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-521000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-521000 status --format={{.Host}}: exit status 7 (32.156208ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-521000 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-521000 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.186304917s)

-- stdout --
	* [kubernetes-upgrade-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-521000" primary control-plane node in "kubernetes-upgrade-521000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-521000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-521000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:35:51.956875    8853 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:35:51.957000    8853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:35:51.957005    8853 out.go:304] Setting ErrFile to fd 2...
	I0702 21:35:51.957008    8853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:35:51.957144    8853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:35:51.958184    8853 out.go:298] Setting JSON to false
	I0702 21:35:51.975549    8853 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5720,"bootTime":1719975631,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:35:51.975654    8853 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:35:51.979802    8853 out.go:177] * [kubernetes-upgrade-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:35:51.987833    8853 notify.go:220] Checking for updates...
	I0702 21:35:51.993740    8853 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:35:52.001635    8853 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:35:52.004712    8853 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:35:52.007764    8853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:35:52.010643    8853 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:35:52.013725    8853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:35:52.017056    8853 config.go:182] Loaded profile config "kubernetes-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0702 21:35:52.017310    8853 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:35:52.021716    8853 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:35:52.028756    8853 start.go:297] selected driver: qemu2
	I0702 21:35:52.028769    8853 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:35:52.028830    8853 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:35:52.031313    8853 cni.go:84] Creating CNI manager for ""
	I0702 21:35:52.031331    8853 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:35:52.031365    8853 start.go:340] cluster config:
	{Name:kubernetes-upgrade-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubernetes-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:35:52.034966    8853 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:35:52.042702    8853 out.go:177] * Starting "kubernetes-upgrade-521000" primary control-plane node in "kubernetes-upgrade-521000" cluster
	I0702 21:35:52.046698    8853 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:35:52.046722    8853 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:35:52.046730    8853 cache.go:56] Caching tarball of preloaded images
	I0702 21:35:52.046805    8853 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:35:52.046811    8853 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:35:52.046865    8853 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/kubernetes-upgrade-521000/config.json ...
	I0702 21:35:52.047197    8853 start.go:360] acquireMachinesLock for kubernetes-upgrade-521000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:35:52.047231    8853 start.go:364] duration metric: took 26.791µs to acquireMachinesLock for "kubernetes-upgrade-521000"
	I0702 21:35:52.047241    8853 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:35:52.047246    8853 fix.go:54] fixHost starting: 
	I0702 21:35:52.047361    8853 fix.go:112] recreateIfNeeded on kubernetes-upgrade-521000: state=Stopped err=<nil>
	W0702 21:35:52.047369    8853 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:35:52.051772    8853 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-521000" ...
	I0702 21:35:52.058738    8853 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:00:50:bd:0a:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2
	I0702 21:35:52.060820    8853 main.go:141] libmachine: STDOUT: 
	I0702 21:35:52.060839    8853 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:35:52.060871    8853 fix.go:56] duration metric: took 13.624584ms for fixHost
	I0702 21:35:52.060875    8853 start.go:83] releasing machines lock for "kubernetes-upgrade-521000", held for 13.640042ms
	W0702 21:35:52.060884    8853 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:35:52.060925    8853 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:35:52.060930    8853 start.go:728] Will try again in 5 seconds ...
	I0702 21:35:57.062708    8853 start.go:360] acquireMachinesLock for kubernetes-upgrade-521000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:35:57.063130    8853 start.go:364] duration metric: took 331.875µs to acquireMachinesLock for "kubernetes-upgrade-521000"
	I0702 21:35:57.063285    8853 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:35:57.063301    8853 fix.go:54] fixHost starting: 
	I0702 21:35:57.063851    8853 fix.go:112] recreateIfNeeded on kubernetes-upgrade-521000: state=Stopped err=<nil>
	W0702 21:35:57.063868    8853 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:35:57.068242    8853 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-521000" ...
	I0702 21:35:57.076359    8853 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:00:50:bd:0a:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubernetes-upgrade-521000/disk.qcow2
	I0702 21:35:57.082886    8853 main.go:141] libmachine: STDOUT: 
	I0702 21:35:57.082932    8853 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:35:57.082983    8853 fix.go:56] duration metric: took 19.684084ms for fixHost
	I0702 21:35:57.082994    8853 start.go:83] releasing machines lock for "kubernetes-upgrade-521000", held for 19.846708ms
	W0702 21:35:57.083137    8853 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-521000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-521000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:35:57.090312    8853 out.go:177] 
	W0702 21:35:57.093187    8853 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:35:57.093208    8853 out.go:239] * 
	* 
	W0702 21:35:57.094942    8853 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:35:57.102189    8853 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-521000 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-521000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-521000 version --output=json: exit status 1 (53.246083ms)

** stderr ** 
	error: context "kubernetes-upgrade-521000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-02 21:35:57.168699 -0700 PDT m=+1045.882098293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-521000 -n kubernetes-upgrade-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-521000 -n kubernetes-upgrade-521000: exit status 7 (32.308375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-521000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-521000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-521000
--- FAIL: TestKubernetesUpgrade (17.08s)
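
Triage note: exit status 80 corresponds to the GUEST_PROVISION error class shown in the stderr above, so the upgrade path was never exercised: the v1.20.0 start, the v1.30.2 restart, and the final "kubectl version" all fail on the same refused socket_vmnet connection. Once the daemon is reachable again, the failing step can be reproduced outside the harness with the test's own arguments:

	out/minikube-darwin-arm64 start -p kubernetes-upgrade-521000 --memory=2200 \
	  --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2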

TestPause/serial/Start (26.3s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-818000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-818000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (26.260426917s)
-- stdout --
	* [pause-818000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-818000" primary control-plane node in "pause-818000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-818000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-818000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-818000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-818000 -n pause-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-818000 -n pause-818000: exit status 7 (39.341125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-818000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (26.30s)
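
Note: every qemu2 failure in this run shares one root cause: nothing is accepting connections on /var/run/socket_vmnet. A standalone Go sketch that reproduces the symptom directly (the socket path is taken from the errors above; this is a diagnostic aid, not part of minikube):

// socketprobe.go - minimal probe for the socket_vmnet daemon.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On the failing CI host this would print something like:
		//   dial unix /var/run/socket_vmnet: connect: connection refused
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
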
TestNoKubernetes/serial/StartWithK8s (9.81s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-934000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-934000 --driver=qemu2 : exit status 80 (9.751521833s)
-- stdout --
	* [NoKubernetes-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-934000" primary control-plane node in "NoKubernetes-934000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-934000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-934000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-934000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000: exit status 7 (58.589042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.81s)
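
Note: the "StartHost failed, but will try again" line shows minikube retrying host creation once (delete, then re-create) before exiting. A generic retry-with-backoff sketch in the same spirit (attempt counts, delays, and names are illustrative; minikube's real retry logic differs):

// retry.go - illustrative retry loop, loosely mirroring the single retry seen in the logs.
package main

import (
	"errors"
	"fmt"
	"time"
)

// withRetry runs fn up to attempts times, doubling the delay between tries.
func withRetry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed, will try again: %v\n", i+1, err)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	err := withRetry(2, 500*time.Millisecond, func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	})
	fmt.Println(err)
}
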
TestNoKubernetes/serial/StartWithStopK8s (5.28s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --driver=qemu2 : exit status 80 (5.2445635s)
-- stdout --
	* [NoKubernetes-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-934000
	* Restarting existing qemu2 VM for "NoKubernetes-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-934000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000: exit status 7 (33.004417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.28s)
TestNoKubernetes/serial/Start (5.28s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --driver=qemu2 : exit status 80 (5.244972208s)
-- stdout --
	* [NoKubernetes-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-934000
	* Restarting existing qemu2 VM for "NoKubernetes-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-934000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000: exit status 7 (35.288667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.28s)
TestNoKubernetes/serial/StartNoArgs (5.27s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-934000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-934000 --driver=qemu2 : exit status 80 (5.234781333s)
-- stdout --
	* [NoKubernetes-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-934000
	* Restarting existing qemu2 VM for "NoKubernetes-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-934000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-934000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-934000 -n NoKubernetes-934000: exit status 7 (31.350125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.27s)
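
Note: all four TestNoKubernetes subtests fail for the same environmental reason, not for anything specific to the --no-kubernetes path. A hypothetical guard that would surface this as one skip rather than four identical failures (the real suite does not do this; names are illustrative):

// skip_guard_test.go - hypothetical guard, not part of the minikube suite.
package example

import (
	"net"
	"testing"
	"time"
)

// requireSocketVMnet skips the calling test when the socket_vmnet daemon is unreachable.
func requireSocketVMnet(t *testing.T) {
	t.Helper()
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
	if err != nil {
		t.Skipf("socket_vmnet not available: %v", err)
	}
	conn.Close()
}

func TestStartWithK8s(t *testing.T) {
	requireSocketVMnet(t)
	// ... the actual start/stop assertions would follow here ...
}
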
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.57s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19184
- KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2015540544/001
* Using the hyperkit driver based on user configuration
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.57s)
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.4s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19184
- KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current103506871/001
* Using the hyperkit driver based on user configuration
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.40s)
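
Note: both hyperkit upgrade tests exit with DRV_UNSUPPORTED_OS because hyperkit is Intel-only and this agent is darwin/arm64 (which is why the rest of this run uses qemu2). The check reduces to a platform guard along these lines (a simplified sketch; minikube's actual driver validation is more involved):

// drvcheck.go - simplified platform guard in the spirit of DRV_UNSUPPORTED_OS.
package main

import (
	"fmt"
	"runtime"
)

// hyperkitSupported reports whether the hyperkit driver can run on this host.
func hyperkitSupported() bool {
	// hyperkit only runs on Intel macOS; darwin/arm64 hosts use qemu2 instead.
	return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
}

func main() {
	if !hyperkitSupported() {
		fmt.Printf("The driver 'hyperkit' is not supported on %s/%s\n", runtime.GOOS, runtime.GOARCH)
	}
}
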
TestStoppedBinaryUpgrade/Upgrade (573.14s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.648617155 start -p stopped-upgrade-896000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.648617155 start -p stopped-upgrade-896000 --memory=2200 --vm-driver=qemu2 : (39.146071709s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.648617155 -p stopped-upgrade-896000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.648617155 -p stopped-upgrade-896000 stop: (12.116132583s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-896000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-896000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.7714955s)
-- stdout --
	* [stopped-upgrade-896000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-896000" primary control-plane node in "stopped-upgrade-896000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-896000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0702 21:36:49.594256    8914 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:36:49.594451    8914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:36:49.594456    8914 out.go:304] Setting ErrFile to fd 2...
	I0702 21:36:49.594459    8914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:36:49.594600    8914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:36:49.595759    8914 out.go:298] Setting JSON to false
	I0702 21:36:49.615010    8914 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5778,"bootTime":1719975631,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:36:49.615078    8914 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:36:49.619244    8914 out.go:177] * [stopped-upgrade-896000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:36:49.627121    8914 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:36:49.627173    8914 notify.go:220] Checking for updates...
	I0702 21:36:49.634105    8914 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:36:49.637163    8914 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:36:49.640193    8914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:36:49.643085    8914 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:36:49.646147    8914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:36:49.649495    8914 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:36:49.653036    8914 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0702 21:36:49.657114    8914 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:36:49.661096    8914 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:36:49.669034    8914 start.go:297] selected driver: qemu2
	I0702 21:36:49.669041    8914 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0702 21:36:49.669085    8914 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:36:49.671612    8914 cni.go:84] Creating CNI manager for ""
	I0702 21:36:49.671629    8914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:36:49.671652    8914 start.go:340] cluster config:
	{Name:stopped-upgrade-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0702 21:36:49.671700    8914 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:36:49.680098    8914 out.go:177] * Starting "stopped-upgrade-896000" primary control-plane node in "stopped-upgrade-896000" cluster
	I0702 21:36:49.684116    8914 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0702 21:36:49.684131    8914 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0702 21:36:49.684138    8914 cache.go:56] Caching tarball of preloaded images
	I0702 21:36:49.684198    8914 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:36:49.684203    8914 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0702 21:36:49.684248    8914 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/config.json ...
	I0702 21:36:49.684564    8914 start.go:360] acquireMachinesLock for stopped-upgrade-896000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:36:49.684596    8914 start.go:364] duration metric: took 26.166µs to acquireMachinesLock for "stopped-upgrade-896000"
	I0702 21:36:49.684606    8914 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:36:49.684611    8914 fix.go:54] fixHost starting: 
	I0702 21:36:49.684718    8914 fix.go:112] recreateIfNeeded on stopped-upgrade-896000: state=Stopped err=<nil>
	W0702 21:36:49.684725    8914 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:36:49.688116    8914 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-896000" ...
	I0702 21:36:49.696149    8914 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51457-:22,hostfwd=tcp::51458-:2376,hostname=stopped-upgrade-896000 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/disk.qcow2
	I0702 21:36:49.741454    8914 main.go:141] libmachine: STDOUT: 
	I0702 21:36:49.741485    8914 main.go:141] libmachine: STDERR: 
	I0702 21:36:49.741496    8914 main.go:141] libmachine: Waiting for VM to start (ssh -p 51457 docker@127.0.0.1)...
	I0702 21:37:09.365629    8914 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/config.json ...
	I0702 21:37:09.365840    8914 machine.go:94] provisionDockerMachine start ...
	I0702 21:37:09.365891    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:09.366037    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:09.366044    8914 main.go:141] libmachine: About to run SSH command:
	hostname
	I0702 21:37:09.416432    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0702 21:37:09.416448    8914 buildroot.go:166] provisioning hostname "stopped-upgrade-896000"
	I0702 21:37:09.416503    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:09.416633    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:09.416638    8914 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-896000 && echo "stopped-upgrade-896000" | sudo tee /etc/hostname
	I0702 21:37:09.468771    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-896000
	
	I0702 21:37:09.468820    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:09.468934    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:09.468943    8914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-896000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-896000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-896000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0702 21:37:09.522928    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0702 21:37:09.522937    8914 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19184-6175/.minikube CaCertPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19184-6175/.minikube}
	I0702 21:37:09.522947    8914 buildroot.go:174] setting up certificates
	I0702 21:37:09.522955    8914 provision.go:84] configureAuth start
	I0702 21:37:09.522959    8914 provision.go:143] copyHostCerts
	I0702 21:37:09.523039    8914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19184-6175/.minikube/key.pem, removing ...
	I0702 21:37:09.523045    8914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19184-6175/.minikube/key.pem
	I0702 21:37:09.523571    8914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19184-6175/.minikube/key.pem (1675 bytes)
	I0702 21:37:09.523779    8914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.pem, removing ...
	I0702 21:37:09.523782    8914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.pem
	I0702 21:37:09.523833    8914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.pem (1078 bytes)
	I0702 21:37:09.523938    8914 exec_runner.go:144] found /Users/jenkins/minikube-integration/19184-6175/.minikube/cert.pem, removing ...
	I0702 21:37:09.523942    8914 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19184-6175/.minikube/cert.pem
	I0702 21:37:09.523987    8914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19184-6175/.minikube/cert.pem (1123 bytes)
	I0702 21:37:09.524076    8914 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-896000 san=[127.0.0.1 localhost minikube stopped-upgrade-896000]
	I0702 21:37:09.602981    8914 provision.go:177] copyRemoteCerts
	I0702 21:37:09.603024    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0702 21:37:09.603032    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/id_rsa Username:docker}
	I0702 21:37:09.631131    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0702 21:37:09.638408    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0702 21:37:09.646965    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0702 21:37:09.653969    8914 provision.go:87] duration metric: took 131.006541ms to configureAuth
	I0702 21:37:09.653978    8914 buildroot.go:189] setting minikube options for container-runtime
	I0702 21:37:09.654098    8914 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:37:09.654134    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:09.654220    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:09.654225    8914 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0702 21:37:09.704826    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0702 21:37:09.704834    8914 buildroot.go:70] root file system type: tmpfs
	I0702 21:37:09.704884    8914 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0702 21:37:09.704931    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:09.705036    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:09.705068    8914 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0702 21:37:09.760269    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0702 21:37:09.760315    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:09.760425    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:09.760455    8914 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0702 21:37:10.147078    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0702 21:37:10.147090    8914 machine.go:97] duration metric: took 781.260584ms to provisionDockerMachine
	I0702 21:37:10.147097    8914 start.go:293] postStartSetup for "stopped-upgrade-896000" (driver="qemu2")
	I0702 21:37:10.147104    8914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0702 21:37:10.147167    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0702 21:37:10.147178    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/id_rsa Username:docker}
	I0702 21:37:10.173380    8914 ssh_runner.go:195] Run: cat /etc/os-release
	I0702 21:37:10.175019    8914 info.go:137] Remote host: Buildroot 2021.02.12
	I0702 21:37:10.175028    8914 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19184-6175/.minikube/addons for local assets ...
	I0702 21:37:10.175124    8914 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19184-6175/.minikube/files for local assets ...
	I0702 21:37:10.175262    8914 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19184-6175/.minikube/files/etc/ssl/certs/66692.pem -> 66692.pem in /etc/ssl/certs
	I0702 21:37:10.175405    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0702 21:37:10.178507    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/files/etc/ssl/certs/66692.pem --> /etc/ssl/certs/66692.pem (1708 bytes)
	I0702 21:37:10.185302    8914 start.go:296] duration metric: took 38.1995ms for postStartSetup
	I0702 21:37:10.185316    8914 fix.go:56] duration metric: took 20.501106125s for fixHost
	I0702 21:37:10.185359    8914 main.go:141] libmachine: Using SSH client type: native
	I0702 21:37:10.185474    8914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a928e0] 0x100a95140 <nil>  [] 0s} localhost 51457 <nil> <nil>}
	I0702 21:37:10.185481    8914 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0702 21:37:10.235705    8914 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719981430.687783629
	
	I0702 21:37:10.235714    8914 fix.go:216] guest clock: 1719981430.687783629
	I0702 21:37:10.235718    8914 fix.go:229] Guest: 2024-07-02 21:37:10.687783629 -0700 PDT Remote: 2024-07-02 21:37:10.185317 -0700 PDT m=+20.621860376 (delta=502.466629ms)
	I0702 21:37:10.235736    8914 fix.go:200] guest clock delta is within tolerance: 502.466629ms
	I0702 21:37:10.235739    8914 start.go:83] releasing machines lock for "stopped-upgrade-896000", held for 20.55153875s
	I0702 21:37:10.235792    8914 ssh_runner.go:195] Run: cat /version.json
	I0702 21:37:10.235801    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/id_rsa Username:docker}
	I0702 21:37:10.235815    8914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0702 21:37:10.235836    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/id_rsa Username:docker}
	W0702 21:37:10.236300    8914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51581->127.0.0.1:51457: write: broken pipe
	I0702 21:37:10.236319    8914 retry.go:31] will retry after 165.268788ms: ssh: handshake failed: write tcp 127.0.0.1:51581->127.0.0.1:51457: write: broken pipe
	W0702 21:37:10.261369    8914 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0702 21:37:10.261414    8914 ssh_runner.go:195] Run: systemctl --version
	I0702 21:37:10.263086    8914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0702 21:37:10.264535    8914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0702 21:37:10.264562    8914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0702 21:37:10.267558    8914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0702 21:37:10.272269    8914 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0702 21:37:10.272283    8914 start.go:494] detecting cgroup driver to use...
	I0702 21:37:10.272360    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0702 21:37:10.279090    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0702 21:37:10.282518    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0702 21:37:10.285399    8914 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0702 21:37:10.285425    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0702 21:37:10.288281    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0702 21:37:10.291620    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0702 21:37:10.294873    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0702 21:37:10.298178    8914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0702 21:37:10.300943    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0702 21:37:10.303883    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0702 21:37:10.307125    8914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0702 21:37:10.310423    8914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0702 21:37:10.312883    8914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0702 21:37:10.315816    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:37:10.403266    8914 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0702 21:37:10.409548    8914 start.go:494] detecting cgroup driver to use...
	I0702 21:37:10.409612    8914 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0702 21:37:10.418217    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0702 21:37:10.423827    8914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0702 21:37:10.435035    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0702 21:37:10.476933    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0702 21:37:10.481901    8914 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0702 21:37:10.540972    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0702 21:37:10.546806    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0702 21:37:10.552585    8914 ssh_runner.go:195] Run: which cri-dockerd
	I0702 21:37:10.554035    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0702 21:37:10.556795    8914 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0702 21:37:10.561728    8914 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0702 21:37:10.637621    8914 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0702 21:37:10.716401    8914 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0702 21:37:10.716466    8914 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0702 21:37:10.721741    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:37:10.790192    8914 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0702 21:37:11.954857    8914 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.16466875s)
	I0702 21:37:11.954931    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0702 21:37:11.959787    8914 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0702 21:37:11.966083    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0702 21:37:11.970325    8914 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0702 21:37:12.037692    8914 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0702 21:37:12.113917    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:37:12.192822    8914 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0702 21:37:12.198557    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0702 21:37:12.202685    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:37:12.265813    8914 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0702 21:37:12.305004    8914 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0702 21:37:12.305102    8914 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0702 21:37:12.307098    8914 start.go:562] Will wait 60s for crictl version
	I0702 21:37:12.307153    8914 ssh_runner.go:195] Run: which crictl
	I0702 21:37:12.308963    8914 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0702 21:37:12.323739    8914 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0702 21:37:12.323804    8914 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0702 21:37:12.340501    8914 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0702 21:37:12.360325    8914 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0702 21:37:12.360447    8914 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0702 21:37:12.361669    8914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
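	This one-liner is minikube's idempotent /etc/hosts edit for its internal names: filter out any stale entry, append the fresh mapping, then copy the temp file back under sudo in a single step. Schematically, with IP and NAME as placeholders rather than literal values:
	{ grep -v $'\tNAME$' /etc/hosts; echo "IP	NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
	The same pattern repeats below for control-plane.minikube.internal.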
	I0702 21:37:12.365029    8914 kubeadm.go:877] updating cluster {Name:stopped-upgrade-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0702 21:37:12.365076    8914 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0702 21:37:12.365129    8914 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0702 21:37:12.375256    8914 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0702 21:37:12.375266    8914 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0702 21:37:12.375308    8914 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0702 21:37:12.378757    8914 ssh_runner.go:195] Run: which lz4
	I0702 21:37:12.380000    8914 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0702 21:37:12.381252    8914 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0702 21:37:12.381264    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0702 21:37:13.319555    8914 docker.go:649] duration metric: took 939.60225ms to copy over tarball
	I0702 21:37:13.319614    8914 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0702 21:37:14.471990    8914 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.152384583s)
	I0702 21:37:14.472004    8914 ssh_runner.go:146] rm: /preloaded.tar.lz4
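	Net effect of the preload path above: the stat shows no /preloaded.tar.lz4 on the node, so the ~360 MB cached tarball is scp'd over, unpacked into /var with lz4 decompression and extended attributes preserved, then removed. The unpack step, verbatim from the log, is the command to reproduce by hand if extraction ever needs debugging:
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4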
	I0702 21:37:14.487922    8914 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0702 21:37:14.491636    8914 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0702 21:37:14.497009    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:37:14.574165    8914 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0702 21:37:16.078019    8914 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.503866708s)
	I0702 21:37:16.078127    8914 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0702 21:37:16.088962    8914 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0702 21:37:16.088972    8914 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0702 21:37:16.088976    8914 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0702 21:37:16.094550    8914 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:37:16.096862    8914 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:37:16.098529    8914 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:37:16.098620    8914 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:37:16.100818    8914 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:37:16.100882    8914 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:37:16.102211    8914 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:37:16.102230    8914 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:37:16.103415    8914 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0702 21:37:16.103421    8914 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:37:16.104650    8914 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:37:16.104660    8914 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0702 21:37:16.105660    8914 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0702 21:37:16.105867    8914 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:37:16.106876    8914 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0702 21:37:16.107530    8914 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:37:16.558714    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:37:16.570019    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:37:16.571010    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:37:16.572581    8914 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0702 21:37:16.572606    8914 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:37:16.572643    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0702 21:37:16.577558    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:37:16.589605    8914 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0702 21:37:16.589627    8914 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:37:16.589682    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0702 21:37:16.591837    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0702 21:37:16.593154    8914 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0702 21:37:16.593170    8914 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:37:16.593201    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0702 21:37:16.594352    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0702 21:37:16.597568    8914 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0702 21:37:16.597586    8914 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:37:16.597633    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0702 21:37:16.605926    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0702 21:37:16.613428    8914 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0702 21:37:16.613446    8914 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0702 21:37:16.613496    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0702 21:37:16.614250    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0702 21:37:16.630176    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0702 21:37:16.630204    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0702 21:37:16.630319    8914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0702 21:37:16.631495    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0702 21:37:16.632808    8914 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0702 21:37:16.632827    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0702 21:37:16.641665    8914 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0702 21:37:16.641685    8914 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0702 21:37:16.641737    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0702 21:37:16.647138    8914 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0702 21:37:16.647270    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:37:16.649226    8914 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0702 21:37:16.649236    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0702 21:37:16.662509    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0702 21:37:16.662611    8914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0702 21:37:16.677566    8914 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0702 21:37:16.677587    8914 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:37:16.677643    8914 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0702 21:37:16.694247    8914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0702 21:37:16.694273    8914 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0702 21:37:16.694300    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0702 21:37:16.694311    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0702 21:37:16.694409    8914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0702 21:37:16.701336    8914 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0702 21:37:16.701415    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0702 21:37:16.713209    8914 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0702 21:37:16.713313    8914 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:37:16.763373    8914 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0702 21:37:16.763387    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0702 21:37:16.787395    8914 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0702 21:37:16.787481    8914 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:37:16.787561    8914 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:37:16.849550    8914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0702 21:37:16.853152    8914 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0702 21:37:16.853276    8914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0702 21:37:16.859942    8914 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0702 21:37:16.859974    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0702 21:37:16.935959    8914 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0702 21:37:16.935973    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0702 21:37:17.288879    8914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0702 21:37:17.288903    8914 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0702 21:37:17.288908    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0702 21:37:17.425392    8914 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0702 21:37:17.425437    8914 cache_images.go:92] duration metric: took 1.336479375s to LoadCachedImages
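	Each image in the LoadCachedImages set goes through the same sequence visible above: docker image inspect to compare the on-node image ID against the expected hash, docker rmi to drop the mismatched k8s.gcr.io/amd64 copy, stat then scp of the cached arm64 tarball into /var/lib/minikube/images, and finally a piped load, e.g. for pause:
	sudo cat /var/lib/minikube/images/pause_3.7 | docker load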
	W0702 21:37:17.425487    8914 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0702 21:37:17.425496    8914 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0702 21:37:17.425552    8914 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-896000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
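	The unit fragment above is a standard systemd drop-in override: the bare ExecStart= line clears whatever ExecStart the base kubelet.service defines, and the following ExecStart= installs the fully flagged command (note --container-runtime-endpoint pointing at the cri-dockerd socket). On the node, the merged unit could be inspected the same way this run used systemctl cat docker.service earlier:
	systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in scp'd below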
	I0702 21:37:17.425620    8914 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0702 21:37:17.438951    8914 cni.go:84] Creating CNI manager for ""
	I0702 21:37:17.438962    8914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:37:17.438967    8914 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0702 21:37:17.438976    8914 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-896000 NodeName:stopped-upgrade-896000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0702 21:37:17.439051    8914 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-896000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0702 21:37:17.439101    8914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0702 21:37:17.442147    8914 binaries.go:44] Found k8s binaries, skipping transfer
	I0702 21:37:17.442176    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0702 21:37:17.444767    8914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0702 21:37:17.449591    8914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0702 21:37:17.454689    8914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
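	Note that the generated config is staged as kubeadm.yaml.new rather than overwriting kubeadm.yaml directly; the restart path below diffs the two to detect drift before promoting the new file with cp. A manual drift check would be the same command the log shows later:
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new   # exit status 0 means no drift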
	I0702 21:37:17.460432    8914 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0702 21:37:17.461702    8914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0702 21:37:17.464990    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:37:17.546611    8914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0702 21:37:17.553622    8914 certs.go:68] Setting up /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000 for IP: 10.0.2.15
	I0702 21:37:17.553638    8914 certs.go:194] generating shared ca certs ...
	I0702 21:37:17.553647    8914 certs.go:226] acquiring lock for ca certs: {Name:mk1563fd1929f66ff1d36559bceb7dd892d19aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:37:17.553823    8914 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.key
	I0702 21:37:17.553876    8914 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/proxy-client-ca.key
	I0702 21:37:17.553883    8914 certs.go:256] generating profile certs ...
	I0702 21:37:17.553960    8914 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/client.key
	I0702 21:37:17.553979    8914 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.key.c154573e
	I0702 21:37:17.553988    8914 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.crt.c154573e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0702 21:37:17.701173    8914 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.crt.c154573e ...
	I0702 21:37:17.701189    8914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.crt.c154573e: {Name:mkffc538c553c82411cd7a5e2a9f64584d49fa3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:37:17.701589    8914 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.key.c154573e ...
	I0702 21:37:17.701596    8914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.key.c154573e: {Name:mkb1593eec78c3bae310795eeae3428ed268c95b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:37:17.701739    8914 certs.go:381] copying /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.crt.c154573e -> /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.crt
	I0702 21:37:17.701876    8914 certs.go:385] copying /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.key.c154573e -> /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.key
	I0702 21:37:17.702031    8914 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/proxy-client.key
	I0702 21:37:17.702169    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/6669.pem (1338 bytes)
	W0702 21:37:17.702203    8914 certs.go:480] ignoring /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/6669_empty.pem, impossibly tiny 0 bytes
	I0702 21:37:17.702212    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca-key.pem (1675 bytes)
	I0702 21:37:17.702235    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem (1078 bytes)
	I0702 21:37:17.702263    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem (1123 bytes)
	I0702 21:37:17.702288    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/key.pem (1675 bytes)
	I0702 21:37:17.702325    8914 certs.go:484] found cert: /Users/jenkins/minikube-integration/19184-6175/.minikube/files/etc/ssl/certs/66692.pem (1708 bytes)
	I0702 21:37:17.702689    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0702 21:37:17.709452    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0702 21:37:17.715963    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0702 21:37:17.723269    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0702 21:37:17.732128    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0702 21:37:17.739322    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0702 21:37:17.746601    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0702 21:37:17.753853    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0702 21:37:17.760594    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/files/etc/ssl/certs/66692.pem --> /usr/share/ca-certificates/66692.pem (1708 bytes)
	I0702 21:37:17.767530    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0702 21:37:17.774573    8914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/6669.pem --> /usr/share/ca-certificates/6669.pem (1338 bytes)
	I0702 21:37:17.781225    8914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0702 21:37:17.785952    8914 ssh_runner.go:195] Run: openssl version
	I0702 21:37:17.787902    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0702 21:37:17.791361    8914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0702 21:37:17.792850    8914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 04:30 /usr/share/ca-certificates/minikubeCA.pem
	I0702 21:37:17.792872    8914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0702 21:37:17.794593    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0702 21:37:17.797462    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6669.pem && ln -fs /usr/share/ca-certificates/6669.pem /etc/ssl/certs/6669.pem"
	I0702 21:37:17.800526    8914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6669.pem
	I0702 21:37:17.802042    8914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 04:19 /usr/share/ca-certificates/6669.pem
	I0702 21:37:17.802061    8914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6669.pem
	I0702 21:37:17.803770    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6669.pem /etc/ssl/certs/51391683.0"
	I0702 21:37:17.807173    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/66692.pem && ln -fs /usr/share/ca-certificates/66692.pem /etc/ssl/certs/66692.pem"
	I0702 21:37:17.810273    8914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/66692.pem
	I0702 21:37:17.811639    8914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 04:19 /usr/share/ca-certificates/66692.pem
	I0702 21:37:17.811660    8914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/66692.pem
	I0702 21:37:17.813485    8914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/66692.pem /etc/ssl/certs/3ec20f2e.0"
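	The ls/openssl/ln triplets above implement OpenSSL's hashed CA directory: each PEM under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink so TLS clients can locate it by subject hash. The hash in the link name is exactly the output of the openssl x509 -hash call, e.g.:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, hence the b5213941.0 link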
	I0702 21:37:17.816482    8914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0702 21:37:17.818071    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0702 21:37:17.820040    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0702 21:37:17.821929    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0702 21:37:17.825163    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0702 21:37:17.826832    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0702 21:37:17.828557    8914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0702 21:37:17.830651    8914 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0702 21:37:17.830729    8914 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0702 21:37:17.841146    8914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0702 21:37:17.844285    8914 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0702 21:37:17.844292    8914 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0702 21:37:17.844295    8914 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0702 21:37:17.844320    8914 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0702 21:37:17.847748    8914 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0702 21:37:17.848060    8914 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-896000" does not appear in /Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:37:17.848181    8914 kubeconfig.go:62] /Users/jenkins/minikube-integration/19184-6175/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-896000" cluster setting kubeconfig missing "stopped-upgrade-896000" context setting]
	I0702 21:37:17.848383    8914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/kubeconfig: {Name:mk27cb7c8451cb331bdc98ce6310b0b3aba92b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:37:17.848808    8914 kapi.go:59] client config for stopped-upgrade-896000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/client.key", CAFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101e21a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0702 21:37:17.849144    8914 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0702 21:37:17.852248    8914 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-896000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
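	The drift is twofold and both hunks matter: the CRI socket gains the unix:// scheme that the v1.24 tooling expects, and cgroupDriver flips from systemd to cgroupfs to match how Docker was configured earlier in this run (docker.go:574). The latter can be confirmed on the node exactly the way minikube did a few lines back:
	docker info --format {{.CgroupDriver}}   # expected: cgroupfs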
	I0702 21:37:17.852255    8914 kubeadm.go:1154] stopping kube-system containers ...
	I0702 21:37:17.852301    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0702 21:37:17.863539    8914 docker.go:483] Stopping containers: [ada7e661f58d 5162823a6147 82726302ecd9 80469431360e 866bbe2600ef ca658153f418 29fd0adefccd 5cbc16914f5c]
	I0702 21:37:17.863605    8914 ssh_runner.go:195] Run: docker stop ada7e661f58d 5162823a6147 82726302ecd9 80469431360e 866bbe2600ef ca658153f418 29fd0adefccd 5cbc16914f5c
	I0702 21:37:17.874403    8914 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0702 21:37:17.879712    8914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0702 21:37:17.882786    8914 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0702 21:37:17.882792    8914 kubeadm.go:156] found existing configuration files:
	
	I0702 21:37:17.882815    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/admin.conf
	I0702 21:37:17.885213    8914 kubeadm.go:162] "https://control-plane.minikube.internal:51493" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0702 21:37:17.885236    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0702 21:37:17.888074    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/kubelet.conf
	I0702 21:37:17.891029    8914 kubeadm.go:162] "https://control-plane.minikube.internal:51493" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0702 21:37:17.891064    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0702 21:37:17.894058    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/controller-manager.conf
	I0702 21:37:17.896554    8914 kubeadm.go:162] "https://control-plane.minikube.internal:51493" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0702 21:37:17.896575    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0702 21:37:17.899548    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/scheduler.conf
	I0702 21:37:17.902350    8914 kubeadm.go:162] "https://control-plane.minikube.internal:51493" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0702 21:37:17.902382    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0702 21:37:17.904773    8914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0702 21:37:17.907847    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0702 21:37:17.932109    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0702 21:37:18.466144    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0702 21:37:18.596840    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0702 21:37:18.626829    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
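	Instead of a monolithic kubeadm init, the restart path replays individual init phases against the staged config, in dependency order: certificates, kubeconfigs, kubelet bring-up, control-plane static pods, then local etcd. Stripped of the sudo env PATH wrapper, the sequence is:
	kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml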
	I0702 21:37:18.649966    8914 api_server.go:52] waiting for apiserver process to appear ...
	I0702 21:37:18.650040    8914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0702 21:37:19.152129    8914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0702 21:37:19.652131    8914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0702 21:37:19.656347    8914 api_server.go:72] duration metric: took 1.006401458s to wait for apiserver process to appear ...
	I0702 21:37:19.656355    8914 api_server.go:88] waiting for apiserver healthz status ...
	I0702 21:37:19.656369    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:24.658316    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:24.658330    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:29.658520    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:29.658557    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:34.658843    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:34.658862    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:39.659324    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:39.659388    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:44.660163    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:44.660240    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:49.661271    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:49.661294    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:54.662393    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:54.662435    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:37:59.663990    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:37:59.664038    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:04.665933    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:04.665975    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:09.668179    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:09.668201    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:14.669315    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:14.669356    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:19.671506    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
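	Every healthz probe in this stretch dies on the client-side 5s timeout (the timestamps advance in five-second steps) without ever reaching an apiserver on 10.0.2.15:8443, which is why the wait loop falls back to gathering logs below. Assuming curl is available on a host with access to the VM network, the equivalent manual probe would be:
	curl -k --max-time 5 https://10.0.2.15:8443/healthz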
	I0702 21:38:19.671622    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:19.684014    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:38:19.684092    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:19.695053    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:38:19.695122    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:19.705756    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:38:19.705818    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:19.716694    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:38:19.716771    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:19.727515    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:38:19.727588    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:19.738100    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:38:19.738167    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:19.748345    8914 logs.go:276] 0 containers: []
	W0702 21:38:19.748354    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:19.748404    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:19.765888    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:38:19.765904    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:38:19.765910    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:38:19.790271    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:38:19.790283    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:38:19.806788    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:38:19.806799    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:38:19.824425    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:38:19.824434    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:38:19.836281    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:38:19.836293    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:38:19.857400    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:38:19.857412    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:38:19.871867    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:38:19.871880    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:38:19.886921    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:19.886934    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:19.891736    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:19.891742    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:19.992009    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:38:19.992020    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:38:20.023740    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:38:20.023767    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:38:20.038040    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:38:20.038051    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:38:20.049672    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:20.049685    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:20.074632    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:20.074641    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:38:20.111689    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:38:20.111699    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:38:20.123043    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:38:20.123054    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:38:20.133983    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:38:20.133992    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:22.648319    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:27.650631    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
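Each probe above follows the same five-second pattern: a GET against https://10.0.2.15:8443/healthz that is abandoned with "context deadline exceeded" when the apiserver never answers. A minimal Go sketch of this kind of probe, assuming a plain net/http client; the URL and the 5 s timeout mirror the log, while the function name and the skipped certificate check are illustrative, not minikube's actual api_server.go code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz issues one GET against the apiserver healthz endpoint
    // with a hard client-side timeout, mirroring the 5 s gap between each
    // "Checking apiserver healthz" and "stopped" pair in the log.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // the apiserver serves a self-signed certificate at this
                // stage, so verification is skipped (illustrative only)
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "context deadline exceeded", as in the log
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("apiserver not healthy:", err)
        }
    }

Against a healthy cluster the call returns nil; here every attempt times out, which is why the diagnostics pass below repeats.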
	I0702 21:38:27.650814    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:27.675313    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:38:27.675415    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:27.690365    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:38:27.690445    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:27.704451    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:38:27.704523    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:27.715037    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:38:27.715114    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:27.725280    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:38:27.725349    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:27.735827    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:38:27.735893    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:27.746103    8914 logs.go:276] 0 containers: []
	W0702 21:38:27.746112    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:27.746165    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:27.761305    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:38:27.761325    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:27.761330    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:27.786781    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:38:27.786790    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:27.798661    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:38:27.798676    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:38:27.813280    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:38:27.813290    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:38:27.824803    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:38:27.824814    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:38:27.841977    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:38:27.841987    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:38:27.856591    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:38:27.856601    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:38:27.868188    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:38:27.868200    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:38:27.888782    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:38:27.888792    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:38:27.900988    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:38:27.900999    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:38:27.912205    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:27.912216    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:38:27.949625    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:27.949636    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:27.953888    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:38:27.953894    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:38:27.968661    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:38:27.968672    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:38:27.979766    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:27.979978    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:28.016811    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:38:28.016823    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:38:28.042401    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:38:28.042415    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
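Every failed probe triggers the same diagnostics pass, and each pass opens by enumerating container IDs per control-plane component with docker ps -a filtered on the kubelet naming convention k8s_<component>. A sketch of that enumeration step, under the assumption that a Docker CLI is on PATH; the helper name listContainers is hypothetical:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the enumeration step in the log: `docker ps -a`
    // is filtered by the kubelet-assigned name prefix (k8s_<component>) and
    // only the container IDs are emitted.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command(
            "docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}",
        ).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // same shape as the logs.go:276 lines, e.g. "2 containers: [...]"
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }

Note that kindnet consistently yields zero containers here, which produces the repeated "No container was found matching" warning.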
	I0702 21:38:30.558069    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:35.560233    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:35.560510    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:35.575488    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:38:35.575576    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:35.587715    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:38:35.587791    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:35.602890    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:38:35.602961    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:35.613726    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:38:35.613794    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:35.624255    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:38:35.624319    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:35.634510    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:38:35.634581    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:35.644875    8914 logs.go:276] 0 containers: []
	W0702 21:38:35.644891    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:35.644950    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:35.655315    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:38:35.655335    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:38:35.655340    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:38:35.667353    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:38:35.667365    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:38:35.680814    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:38:35.680825    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:38:35.692051    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:38:35.692063    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:38:35.702686    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:38:35.702698    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:38:35.718049    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:38:35.718060    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:35.729770    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:38:35.729780    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:38:35.748011    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:38:35.748021    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:38:35.772343    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:38:35.772353    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:38:35.786399    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:38:35.786408    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:38:35.797908    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:38:35.797918    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:38:35.814956    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:38:35.814966    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:38:35.827147    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:35.827156    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:35.852495    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:35.852504    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:35.857158    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:38:35.857165    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:38:35.870868    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:35.870878    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:38:35.908777    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:35.908785    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:38.449040    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:43.451240    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:43.451451    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:43.475235    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:38:43.475321    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:43.487900    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:38:43.487977    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:43.498360    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:38:43.498435    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:43.509114    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:38:43.509190    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:43.519738    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:38:43.519812    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:43.530489    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:38:43.530564    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:43.542104    8914 logs.go:276] 0 containers: []
	W0702 21:38:43.542116    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:43.542201    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:43.561708    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:38:43.561726    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:38:43.561731    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:38:43.587281    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:38:43.587293    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:38:43.598828    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:38:43.598838    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:38:43.614745    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:38:43.614757    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:38:43.627867    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:38:43.627877    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:38:43.641549    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:43.641560    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:43.665966    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:38:43.665974    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:43.677377    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:38:43.677388    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:38:43.691447    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:43.691456    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:43.730325    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:38:43.730335    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:38:43.744078    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:38:43.744088    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:38:43.758213    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:38:43.758224    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:38:43.769922    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:43.769932    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:43.773940    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:38:43.773945    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:38:43.789425    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:38:43.789437    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:38:43.807526    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:38:43.807536    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:38:43.823125    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:43.823135    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
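With the IDs in hand, each "Gathering logs for ... [id]" line then tails the last 400 lines of that container via docker logs. A sketch of one such gathering step; gatherLogs is a hypothetical name, and the container ID in main is taken from the log above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs mirrors each "Gathering logs for <component> [<id>] ..."
    // step: pull the last 400 lines of one container's combined output,
    // as `docker logs --tail 400 <id>` does in the log.
    func gatherLogs(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        // e.g. the second kube-apiserver container seen in the enumeration
        text, err := gatherLogs("80469431360e")
        if err != nil {
            fmt.Println("gather failed:", err)
            return
        }
        fmt.Print(text)
    }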
	I0702 21:38:46.361850    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:51.364102    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:51.364232    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:51.377214    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:38:51.377288    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:51.388019    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:38:51.388086    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:51.398472    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:38:51.398534    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:51.409106    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:38:51.409183    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:51.419178    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:38:51.419243    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:51.435093    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:38:51.435168    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:51.445219    8914 logs.go:276] 0 containers: []
	W0702 21:38:51.445230    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:51.445285    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:51.455569    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:38:51.455588    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:51.455595    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:38:51.495596    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:51.495605    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:51.499862    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:51.499874    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:51.523974    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:38:51.523981    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:38:51.545306    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:38:51.545316    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:38:51.563380    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:38:51.563390    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:38:51.576259    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:38:51.576269    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:38:51.595788    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:38:51.595799    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:38:51.609781    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:38:51.609792    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:38:51.634848    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:38:51.634858    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:38:51.647192    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:38:51.647205    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:38:51.662224    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:38:51.662234    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:38:51.674474    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:38:51.674485    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:38:51.686798    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:51.686811    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:51.728507    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:38:51.728520    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:38:51.749077    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:38:51.749086    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:38:51.760693    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:38:51.760708    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:38:54.274038    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:38:59.276242    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:38:59.276339    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:38:59.288738    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:38:59.288803    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:38:59.299649    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:38:59.299723    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:38:59.309617    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:38:59.309686    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:38:59.320008    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:38:59.320080    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:38:59.330455    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:38:59.330517    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:38:59.340983    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:38:59.341046    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:38:59.351052    8914 logs.go:276] 0 containers: []
	W0702 21:38:59.351065    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:38:59.351149    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:38:59.371649    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:38:59.371666    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:38:59.371671    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:38:59.375927    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:38:59.375934    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:38:59.410656    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:38:59.410667    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:38:59.424750    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:38:59.424765    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:38:59.449609    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:38:59.449617    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:38:59.467341    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:38:59.467351    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:38:59.483155    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:38:59.483165    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:38:59.498191    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:38:59.498204    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:38:59.511438    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:38:59.511450    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:38:59.522978    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:38:59.522988    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:38:59.559760    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:38:59.559768    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:38:59.573512    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:38:59.573524    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:38:59.584682    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:38:59.584698    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:38:59.608185    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:38:59.608195    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:38:59.621952    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:38:59.621963    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:38:59.633753    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:38:59.633763    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:38:59.650568    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:38:59.650578    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
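The "container status" step closes each pass with a shell fallback: use crictl when it exists, otherwise fall back to plain docker ps -a, exactly as the command on the line above does. The same fallback expressed in Go, with containerStatus as an illustrative name:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus mirrors the shell fallback
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    // prefer crictl when it is on PATH, otherwise use the Docker CLI.
    func containerStatus() (string, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
                return string(out), nil
            }
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("status failed:", err)
            return
        }
        fmt.Print(out)
    }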
	I0702 21:39:02.165141    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:07.166956    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:07.167151    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:07.185450    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:39:07.185544    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:07.201920    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:39:07.202005    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:07.213305    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:39:07.213382    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:07.223429    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:39:07.223495    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:07.235523    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:39:07.235600    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:07.247475    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:39:07.247543    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:07.258142    8914 logs.go:276] 0 containers: []
	W0702 21:39:07.258156    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:07.258214    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:07.269011    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:39:07.269035    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:39:07.269040    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:39:07.280091    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:39:07.280102    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:39:07.295173    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:39:07.295184    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:39:07.308629    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:39:07.308643    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:39:07.320500    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:39:07.320512    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:39:07.331779    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:39:07.331790    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:07.343134    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:39:07.343146    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:39:07.356964    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:39:07.356973    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:39:07.368178    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:07.368191    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:07.372491    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:39:07.372498    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:39:07.386526    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:07.386536    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:07.429511    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:39:07.429522    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:39:07.455431    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:39:07.455443    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:39:07.472251    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:39:07.472265    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:39:07.483845    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:39:07.483855    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:39:07.500954    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:07.500968    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:07.526013    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:07.526021    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:39:10.066703    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:15.068965    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:15.069210    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:15.092460    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:39:15.092562    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:15.108434    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:39:15.108514    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:15.122010    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:39:15.122085    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:15.132838    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:39:15.132911    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:15.143347    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:39:15.143412    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:15.154275    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:39:15.154349    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:15.165136    8914 logs.go:276] 0 containers: []
	W0702 21:39:15.165147    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:15.165201    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:15.179956    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:39:15.179977    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:39:15.179983    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:39:15.194123    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:39:15.194133    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:39:15.205910    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:39:15.205920    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:39:15.223277    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:39:15.223288    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:39:15.234531    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:39:15.234544    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:39:15.248409    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:39:15.248418    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:39:15.262020    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:15.262031    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:15.288749    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:39:15.288760    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:15.300806    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:39:15.300816    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:39:15.315510    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:15.315520    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:15.350261    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:39:15.350272    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:39:15.375829    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:39:15.375842    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:39:15.394369    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:15.394384    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:15.399626    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:39:15.399646    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:39:15.411906    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:39:15.411918    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:39:15.434491    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:39:15.434502    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:39:15.446306    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:15.446321    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:39:17.985896    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:22.988035    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:22.988217    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:23.003936    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:39:23.004015    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:23.016393    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:39:23.016464    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:23.026422    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:39:23.026494    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:23.037600    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:39:23.037665    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:23.047741    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:39:23.047806    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:23.059170    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:39:23.059234    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:23.069674    8914 logs.go:276] 0 containers: []
	W0702 21:39:23.069685    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:23.069738    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:23.080293    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:39:23.080313    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:23.080319    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:39:23.117102    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:23.117111    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:23.151362    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:39:23.151374    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:39:23.165441    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:39:23.165451    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:39:23.179566    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:23.179577    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:23.204853    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:39:23.204867    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:39:23.216901    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:23.216914    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:23.220977    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:39:23.220985    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:39:23.232482    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:39:23.232493    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:39:23.248586    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:39:23.248599    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:39:23.265903    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:39:23.265915    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:39:23.282469    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:39:23.282479    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:39:23.293995    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:39:23.294020    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:23.306240    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:39:23.306251    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:39:23.331509    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:39:23.331522    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:39:23.345823    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:39:23.345834    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:39:23.362259    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:39:23.362270    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:39:25.880599    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:30.882965    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:30.883243    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:30.912143    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:39:30.912244    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:30.926727    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:39:30.926807    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:30.939242    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:39:30.939323    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:30.951622    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:39:30.951700    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:30.962067    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:39:30.962138    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:30.972589    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:39:30.972663    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:30.982784    8914 logs.go:276] 0 containers: []
	W0702 21:39:30.982794    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:30.982850    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:30.993484    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:39:30.993506    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:39:30.993511    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:39:31.007299    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:39:31.007309    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:39:31.019183    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:39:31.019192    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:39:31.030214    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:39:31.030224    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:39:31.045236    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:39:31.045245    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:39:31.063878    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:31.063888    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:39:31.102593    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:31.102604    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:31.107181    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:39:31.107187    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:39:31.120990    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:39:31.120999    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:39:31.132034    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:39:31.132046    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:39:31.149063    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:39:31.149072    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:39:31.166137    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:31.166147    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:31.200855    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:39:31.200865    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:39:31.225586    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:39:31.225597    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:39:31.240058    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:39:31.240068    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:39:31.252140    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:31.252150    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:31.277064    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:39:31.277074    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:33.792917    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:38.795201    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:38.795346    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:38.815092    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:39:38.815175    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:38.829485    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:39:38.829565    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:38.842557    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:39:38.842630    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:38.854295    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:39:38.854384    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:38.866283    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:39:38.866355    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:38.877909    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:39:38.878004    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:38.890570    8914 logs.go:276] 0 containers: []
	W0702 21:39:38.890581    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:38.890643    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:38.901771    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:39:38.901791    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:38.901798    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:39:38.939195    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:39:38.939205    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:39:38.964098    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:39:38.964108    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:39:38.981083    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:39:38.981094    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:39:38.997301    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:39:38.997312    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:39:39.009854    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:39:39.009864    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:39:39.027191    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:39:39.027201    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:39.039156    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:39.039168    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:39.043720    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:39:39.043729    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:39:39.061392    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:39:39.061401    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:39:39.074093    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:39:39.074105    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:39:39.085579    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:39.085589    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:39.120304    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:39:39.120316    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:39:39.135726    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:39:39.135743    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:39:39.153359    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:39:39.153373    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:39:39.168665    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:39:39.168682    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:39:39.179801    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:39.179811    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
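Taken together, this section is one outer wait loop: probe /healthz, run a full diagnostics pass when the probe fails, and retry until an overall deadline. The probe starts are roughly eight seconds apart (five seconds of timeout plus the diagnostics pass). A compressed sketch of that loop; the fixed two-second pause and the one-minute deadline are assumptions for illustration, and the names are not minikube's actual api_server.go code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForAPIServer condenses the outer loop visible in this section:
    // probe /healthz, and when the probe fails, pause (the real code runs
    // a diagnostics pass here) and try again until an overall deadline.
    func waitForAPIServer(url string, overall time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the 5 s per-probe gap in the log
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed cert
            },
        }
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver healthy
                }
            }
            time.Sleep(2 * time.Second) // stand-in for the diagnostics pass
        }
        return fmt.Errorf("apiserver not healthy within %s", overall)
    }

    func main() {
        if err := waitForAPIServer("https://10.0.2.15:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }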
	I0702 21:39:41.707153    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:46.709338    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:46.709514    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:46.725767    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:39:46.725855    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:46.738504    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:39:46.738577    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:46.749808    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:39:46.749878    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:46.767561    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:39:46.767632    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:46.780992    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:39:46.781068    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:46.795382    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:39:46.795453    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:46.805236    8914 logs.go:276] 0 containers: []
	W0702 21:39:46.805247    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:46.805307    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:46.818199    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
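The discovery phase just above maps one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call to each control-plane component; two IDs for a component (as for kube-apiserver here) indicate a current container plus an exited or restarted one. A minimal Go sketch of that step, assuming a hypothetical findContainers helper rather than minikube's actual API:

```go
// Sketch of the per-component container discovery shown in the log.
// findContainers is an illustrative helper, not part of minikube.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func findContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; two IDs usually mean a restarted component.
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := findContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Mirrors the log's own "N containers: [...]" lines from logs.go:276.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```

An empty result (as for "kindnet" in every cycle here) is what triggers the warning line `No container was found matching "kindnet"`.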
	I0702 21:39:46.818243    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:46.818250    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:46.822714    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:39:46.822720    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:39:46.846168    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:39:46.846179    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:39:46.860298    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:39:46.860309    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:39:46.872068    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:39:46.872081    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:46.885074    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:39:46.885087    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:39:46.901334    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:39:46.901344    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:39:46.913508    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:39:46.913518    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:39:46.931029    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:46.931039    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:39:46.971084    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:39:46.971095    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:39:46.985400    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:39:46.985410    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:39:46.997289    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:46.997300    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:47.020712    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:47.020720    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:47.059426    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:39:47.059438    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:39:47.074447    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:39:47.074459    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:39:47.086061    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:39:47.086070    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:39:47.099105    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:39:47.099116    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
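The gathering phase that closes each cycle runs one shell command per source over minikube's SSH runner: journalctl for kubelet and Docker, dmesg, kubectl describe nodes, crictl (falling back to `docker ps`) for container status, and `docker logs --tail 400 <id>` per discovered container. A self-contained sketch under those assumptions, substituting local exec for the SSH runner and using an illustrative sources map (the log's varying per-cycle ordering is mirrored here by Go's randomized map iteration):

```go
// Sketch of the gathering phase: each source is one bash command on the node.
// gatherLogs and the sources map are editorial assumptions for illustration.
package main

import (
	"fmt"
	"os/exec"
)

func gatherLogs(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n", name, err)
	}
	_ = out // the real tool attaches this output to the test report
}

func main() {
	sources := map[string]string{
		"kubelet":        "sudo journalctl -u kubelet -n 400",
		"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
		// Falls back to docker when crictl is absent, exactly as in the log.
		"container status":              "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"kube-apiserver [ad463abcc362]": "docker logs --tail 400 ad463abcc362",
	}
	for name, cmd := range sources {
		gatherLogs(name, cmd)
	}
}
```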
	I0702 21:39:49.613676    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:39:54.615557    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:39:54.615710    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:39:54.631208    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:39:54.631289    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:39:54.643149    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:39:54.643224    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:39:54.653662    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:39:54.653727    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:39:54.664580    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:39:54.664644    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:39:54.675868    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:39:54.675941    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:39:54.686602    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:39:54.686661    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:39:54.697202    8914 logs.go:276] 0 containers: []
	W0702 21:39:54.697213    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:39:54.697269    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:39:54.708100    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:39:54.708117    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:39:54.708122    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:39:54.721608    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:39:54.721618    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:39:54.739283    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:39:54.739294    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:39:54.750944    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:39:54.750953    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:39:54.775095    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:39:54.775102    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:39:54.779635    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:39:54.779645    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:39:54.805076    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:39:54.805088    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:39:54.822094    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:39:54.822106    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:39:54.833245    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:39:54.833260    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:39:54.845123    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:39:54.845135    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:39:54.856822    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:39:54.856833    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:39:54.868622    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:39:54.868633    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:39:54.883953    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:39:54.883964    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:39:54.897258    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:39:54.897270    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:39:54.935627    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:39:54.935637    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:39:54.949568    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:39:54.949582    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:39:54.983351    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:39:54.983363    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:39:57.499889    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:40:02.502311    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:40:02.502660    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:40:02.534905    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:40:02.535037    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:40:02.553872    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:40:02.553970    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:40:02.570292    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:40:02.570362    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:40:02.581718    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:40:02.581784    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:40:02.592321    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:40:02.592398    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:40:02.603355    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:40:02.603420    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:40:02.613892    8914 logs.go:276] 0 containers: []
	W0702 21:40:02.613904    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:40:02.613965    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:40:02.624707    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:40:02.624724    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:40:02.624730    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:40:02.641166    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:40:02.641179    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:40:02.666863    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:40:02.666873    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:40:02.681622    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:40:02.681632    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:40:02.693696    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:40:02.693707    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:40:02.718400    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:40:02.718416    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:40:02.730499    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:40:02.730511    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:40:02.767310    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:40:02.767319    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:40:02.786126    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:40:02.786135    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:40:02.803624    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:40:02.803634    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:40:02.814723    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:40:02.814733    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:40:02.819334    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:40:02.819341    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:40:02.838045    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:40:02.838057    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:40:02.851514    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:40:02.851525    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:40:02.887122    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:40:02.887132    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:40:02.902439    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:40:02.902451    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:40:02.914097    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:40:02.914111    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:40:05.431828    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:40:10.434308    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:40:10.434603    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:40:10.461843    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:40:10.461968    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:40:10.479259    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:40:10.479355    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:40:10.493018    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:40:10.493092    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:40:10.504858    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:40:10.504938    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:40:10.515701    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:40:10.515770    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:40:10.526068    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:40:10.526141    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:40:10.535819    8914 logs.go:276] 0 containers: []
	W0702 21:40:10.535830    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:40:10.535893    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:40:10.546578    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:40:10.546594    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:40:10.546599    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:40:10.570956    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:40:10.570973    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:40:10.586583    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:40:10.586592    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:40:10.599572    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:40:10.599582    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:40:10.613842    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:40:10.613851    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:40:10.627517    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:40:10.627528    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:40:10.644362    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:40:10.644372    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:40:10.668432    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:40:10.668443    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:40:10.683200    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:40:10.683209    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:40:10.694681    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:40:10.694693    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:40:10.707514    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:40:10.707525    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:40:10.712187    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:40:10.712195    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:40:10.747213    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:40:10.747226    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:40:10.767308    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:40:10.767321    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:40:10.778316    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:40:10.778326    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:40:10.816656    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:40:10.816665    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:40:10.828332    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:40:10.828346    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:40:13.342068    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:40:18.344404    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:40:18.344551    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:40:18.358369    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:40:18.358453    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:40:18.370293    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:40:18.370366    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:40:18.380768    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:40:18.380840    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:40:18.391731    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:40:18.391806    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:40:18.402704    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:40:18.402780    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:40:18.413238    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:40:18.413309    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:40:18.423349    8914 logs.go:276] 0 containers: []
	W0702 21:40:18.423357    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:40:18.423407    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:40:18.433977    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:40:18.433996    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:40:18.434004    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:40:18.438364    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:40:18.438372    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:40:18.475859    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:40:18.475870    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:40:18.491348    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:40:18.491358    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:40:18.515440    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:40:18.515454    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:40:18.538058    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:40:18.538066    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:40:18.550400    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:40:18.550415    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:40:18.574424    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:40:18.574434    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:40:18.585456    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:40:18.585466    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:40:18.598207    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:40:18.598217    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:40:18.610043    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:40:18.610058    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:40:18.626911    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:40:18.626921    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:40:18.638653    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:40:18.638662    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:40:18.649809    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:40:18.649819    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:40:18.687082    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:40:18.687089    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:40:18.702608    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:40:18.702617    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:40:18.717338    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:40:18.717347    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:40:21.230515    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:40:26.233232    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:40:26.233637    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:40:26.277970    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:40:26.278094    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:40:26.307204    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:40:26.307284    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:40:26.320118    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:40:26.320184    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:40:26.331053    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:40:26.331121    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:40:26.345396    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:40:26.345456    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:40:26.355792    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:40:26.355856    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:40:26.366662    8914 logs.go:276] 0 containers: []
	W0702 21:40:26.366675    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:40:26.366732    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:40:26.385367    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:40:26.385387    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:40:26.385392    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:40:26.409057    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:40:26.409067    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:40:26.420624    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:40:26.420635    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:40:26.445456    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:40:26.445467    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:40:26.456710    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:40:26.456722    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:40:26.473809    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:40:26.473821    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:40:26.509155    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:40:26.509169    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:40:26.521537    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:40:26.521549    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:40:26.532851    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:40:26.532863    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:40:26.545564    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:40:26.545574    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:40:26.556327    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:40:26.556338    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:40:26.594408    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:40:26.594416    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:40:26.608404    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:40:26.608416    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:40:26.622633    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:40:26.622643    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:40:26.637880    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:40:26.637890    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:40:26.650199    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:40:26.650213    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:40:26.654752    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:40:26.654761    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:40:29.169899    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:40:34.170490    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:40:34.170584    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:40:34.184713    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:40:34.184777    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:40:34.197827    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:40:34.197925    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:40:34.210541    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:40:34.210601    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:40:34.224043    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:40:34.224103    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:40:34.235904    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:40:34.235963    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:40:34.247877    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:40:34.247923    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:40:34.260230    8914 logs.go:276] 0 containers: []
	W0702 21:40:34.260246    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:40:34.260357    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:40:34.274062    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:40:34.274085    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:40:34.274092    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:40:34.288044    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:40:34.288054    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:40:34.302377    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:40:34.302392    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:40:34.327814    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:40:34.327827    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:40:34.354138    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:40:34.354149    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:40:34.372235    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:40:34.372248    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:40:34.387036    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:40:34.387049    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:40:34.398714    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:40:34.398724    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:40:34.437326    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:40:34.437334    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:40:34.445722    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:40:34.445731    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:40:34.459811    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:40:34.459822    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:40:34.472289    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:40:34.472298    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:40:34.487077    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:40:34.487087    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:40:34.503049    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:40:34.503058    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:40:34.522940    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:40:34.522950    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:40:34.558169    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:40:34.558179    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:40:34.573186    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:40:34.573196    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:40:37.085463    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:40:42.087722    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:40:42.087890    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:40:42.105574    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:40:42.105656    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:40:42.118629    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:40:42.118701    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:40:42.130309    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:40:42.130375    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:40:42.141189    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:40:42.141267    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:40:42.151377    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:40:42.151438    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:40:42.162557    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:40:42.162623    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:40:42.172627    8914 logs.go:276] 0 containers: []
	W0702 21:40:42.172638    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:40:42.172694    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:40:42.182973    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:40:42.182993    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:40:42.182998    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:40:42.194140    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:40:42.194151    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:40:42.211410    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:40:42.211421    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:40:42.222671    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:40:42.222683    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:40:42.245394    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:40:42.245403    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:40:42.269895    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:40:42.269906    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:40:42.283525    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:40:42.283538    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:40:42.294899    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:40:42.294911    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:40:42.307269    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:40:42.307280    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:40:42.343577    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:40:42.343586    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:40:42.358175    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:40:42.358184    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:40:42.370347    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:40:42.370360    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:40:42.385115    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:40:42.385127    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:40:42.398671    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:40:42.398682    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:40:42.412665    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:40:42.412674    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:40:42.466226    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:40:42.466240    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:40:42.498792    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:40:42.498805    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:40:45.005202    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:40:50.007437    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:40:50.007633    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:40:50.026797    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:40:50.026890    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:40:50.040855    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:40:50.040933    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:40:50.052698    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:40:50.052760    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:40:50.067044    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:40:50.067108    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:40:50.076918    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:40:50.076986    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:40:50.087114    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:40:50.087184    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:40:50.097434    8914 logs.go:276] 0 containers: []
	W0702 21:40:50.097451    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:40:50.097518    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:40:50.107599    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:40:50.107615    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:40:50.107623    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:40:50.111814    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:40:50.111823    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:40:50.149096    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:40:50.149105    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:40:50.162973    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:40:50.162984    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:40:50.174232    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:40:50.174246    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:40:50.186051    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:40:50.186061    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:40:50.198062    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:40:50.198071    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:40:50.216765    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:40:50.216775    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:40:50.241995    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:40:50.242004    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:40:50.257286    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:40:50.257299    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:40:50.269623    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:40:50.269634    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:40:50.283114    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:40:50.283124    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:40:50.294744    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:40:50.294755    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:40:50.331832    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:40:50.331842    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:40:50.348927    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:40:50.348936    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:40:50.363548    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:40:50.363559    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:40:50.374890    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:40:50.374900    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:40:52.899142    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:40:57.900987    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:40:57.901457    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:40:57.941234    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:40:57.941371    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:40:57.964639    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:40:57.964755    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:40:57.979648    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:40:57.979732    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:40:57.992135    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:40:57.992210    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:40:58.004704    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:40:58.004778    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:40:58.015546    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:40:58.015620    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:40:58.026252    8914 logs.go:276] 0 containers: []
	W0702 21:40:58.026270    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:40:58.026322    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:40:58.041364    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:40:58.041382    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:40:58.041387    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:40:58.054361    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:40:58.054375    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:40:58.058471    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:40:58.058477    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:40:58.072604    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:40:58.072618    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:40:58.099413    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:40:58.099422    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:40:58.121993    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:40:58.122005    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:40:58.139369    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:40:58.139381    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:40:58.161220    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:40:58.161229    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:40:58.194835    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:40:58.194846    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:40:58.209461    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:40:58.209471    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:40:58.221850    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:40:58.221860    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:40:58.237250    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:40:58.237263    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:40:58.275730    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:40:58.275738    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:40:58.288561    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:40:58.288572    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:40:58.303984    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:40:58.303997    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:40:58.316339    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:40:58.316350    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:40:58.330173    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:40:58.330185    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:41:00.848409    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:41:05.851128    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:41:05.851605    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:41:05.893097    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:41:05.893235    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:41:05.915140    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:41:05.915257    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:41:05.931511    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:41:05.931575    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:41:05.944321    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:41:05.944388    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:41:05.955217    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:41:05.955285    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:41:05.970363    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:41:05.970429    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:41:05.980467    8914 logs.go:276] 0 containers: []
	W0702 21:41:05.980477    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:41:05.980534    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:41:05.991422    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:41:05.991447    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:41:05.991453    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:41:06.025288    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:41:06.025300    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:41:06.040259    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:41:06.040272    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:41:06.051971    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:41:06.051981    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:41:06.070379    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:41:06.070391    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:41:06.095856    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:41:06.095867    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:41:06.111561    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:41:06.111574    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:41:06.122919    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:41:06.122931    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:41:06.160751    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:41:06.160760    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:41:06.164950    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:41:06.164958    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:41:06.176350    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:41:06.176361    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:41:06.187662    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:41:06.187673    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:41:06.209070    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:41:06.209077    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:41:06.220628    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:41:06.220638    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:41:06.238272    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:41:06.238283    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:41:06.250134    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:41:06.250146    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:41:06.273280    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:41:06.273291    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:41:08.786961    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:41:13.789621    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:41:13.790046    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:41:13.829822    8914 logs.go:276] 2 containers: [ad463abcc362 80469431360e]
	I0702 21:41:13.829943    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:41:13.852516    8914 logs.go:276] 2 containers: [c5d861478edb ada7e661f58d]
	I0702 21:41:13.852629    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:41:13.867974    8914 logs.go:276] 1 containers: [844d5cf25e26]
	I0702 21:41:13.868046    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:41:13.881222    8914 logs.go:276] 2 containers: [d2cb916b520b 5162823a6147]
	I0702 21:41:13.881300    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:41:13.892025    8914 logs.go:276] 1 containers: [5627b5bc64c0]
	I0702 21:41:13.892094    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:41:13.903101    8914 logs.go:276] 2 containers: [8e369ee0fb12 82726302ecd9]
	I0702 21:41:13.903167    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:41:13.914675    8914 logs.go:276] 0 containers: []
	W0702 21:41:13.914686    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:41:13.914746    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:41:13.925700    8914 logs.go:276] 2 containers: [d33a415724d7 ee490863a876]
	I0702 21:41:13.925721    8914 logs.go:123] Gathering logs for coredns [844d5cf25e26] ...
	I0702 21:41:13.925727    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 844d5cf25e26"
	I0702 21:41:13.937114    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:41:13.937124    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:41:13.976016    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:41:13.976026    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:41:14.011822    8914 logs.go:123] Gathering logs for kube-apiserver [80469431360e] ...
	I0702 21:41:14.011836    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80469431360e"
	I0702 21:41:14.043803    8914 logs.go:123] Gathering logs for kube-scheduler [d2cb916b520b] ...
	I0702 21:41:14.043815    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2cb916b520b"
	I0702 21:41:14.055856    8914 logs.go:123] Gathering logs for storage-provisioner [d33a415724d7] ...
	I0702 21:41:14.055868    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d33a415724d7"
	I0702 21:41:14.067102    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:41:14.067114    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:41:14.080206    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:41:14.080219    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:41:14.084180    8914 logs.go:123] Gathering logs for etcd [c5d861478edb] ...
	I0702 21:41:14.084188    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5d861478edb"
	I0702 21:41:14.097707    8914 logs.go:123] Gathering logs for kube-scheduler [5162823a6147] ...
	I0702 21:41:14.097717    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5162823a6147"
	I0702 21:41:14.112904    8914 logs.go:123] Gathering logs for storage-provisioner [ee490863a876] ...
	I0702 21:41:14.112915    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee490863a876"
	I0702 21:41:14.123536    8914 logs.go:123] Gathering logs for kube-apiserver [ad463abcc362] ...
	I0702 21:41:14.123547    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad463abcc362"
	I0702 21:41:14.153443    8914 logs.go:123] Gathering logs for etcd [ada7e661f58d] ...
	I0702 21:41:14.153455    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ada7e661f58d"
	I0702 21:41:14.168900    8914 logs.go:123] Gathering logs for kube-proxy [5627b5bc64c0] ...
	I0702 21:41:14.168913    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5627b5bc64c0"
	I0702 21:41:14.180792    8914 logs.go:123] Gathering logs for kube-controller-manager [8e369ee0fb12] ...
	I0702 21:41:14.180804    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e369ee0fb12"
	I0702 21:41:14.198355    8914 logs.go:123] Gathering logs for kube-controller-manager [82726302ecd9] ...
	I0702 21:41:14.198365    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82726302ecd9"
	I0702 21:41:14.211304    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:41:14.211318    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:41:16.735296    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:41:21.736911    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:41:21.737083    8914 kubeadm.go:591] duration metric: took 4m3.897555542s to restartPrimaryControlPlane
	W0702 21:41:21.737192    8914 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
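	[Editor's note] The api_server.go:253/269 pairs above and below show minikube probing the apiserver's /healthz endpoint, with each GET capped by a roughly 5s client timeout ("Client.Timeout exceeded while awaiting headers") until an overall deadline expires and the control plane is reset. A minimal Go sketch of that probe-until-deadline pattern, assuming the URL and timeouts seen in this log; pollHealthz is a hypothetical helper name, and real minikube verifies the cluster CA and uses a backoff schedule rather than a fixed sleep:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz probes url until it returns 200 or the overall deadline passes.
	// Each individual GET is capped by perProbe, which is what produces the
	// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
	// lines in the log above.
	func pollHealthz(url string, perProbe, overall time.Duration) error {
		client := &http.Client{
			Timeout: perProbe,
			// The test VM serves a self-signed cert; skipping verification keeps
			// the sketch self-contained (minikube itself pins the cluster CA).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					return nil // apiserver answered /healthz
				}
				err = fmt.Errorf("status %d", resp.StatusCode)
			}
			fmt.Printf("stopped: %s: %v\n", url, err)
			time.Sleep(2 * time.Second) // pause between probes, as in the gaps above
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		_ = pollHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 4*time.Minute)
	}
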
	I0702 21:41:21.737262    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0702 21:41:22.813748    8914 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.076487792s)
	I0702 21:41:22.813810    8914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0702 21:41:22.818861    8914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0702 21:41:22.821609    8914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0702 21:41:22.824439    8914 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0702 21:41:22.824445    8914 kubeadm.go:156] found existing configuration files:
	
	I0702 21:41:22.824465    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/admin.conf
	I0702 21:41:22.826882    8914 kubeadm.go:162] "https://control-plane.minikube.internal:51493" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0702 21:41:22.826903    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0702 21:41:22.829619    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/kubelet.conf
	I0702 21:41:22.832640    8914 kubeadm.go:162] "https://control-plane.minikube.internal:51493" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0702 21:41:22.832662    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0702 21:41:22.835270    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/controller-manager.conf
	I0702 21:41:22.837628    8914 kubeadm.go:162] "https://control-plane.minikube.internal:51493" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0702 21:41:22.837649    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0702 21:41:22.840615    8914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/scheduler.conf
	I0702 21:41:22.843083    8914 kubeadm.go:162] "https://control-plane.minikube.internal:51493" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51493 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0702 21:41:22.843104    8914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0702 21:41:22.845593    8914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0702 21:41:22.862331    8914 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0702 21:41:22.862373    8914 kubeadm.go:309] [preflight] Running pre-flight checks
	I0702 21:41:22.911468    8914 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0702 21:41:22.911534    8914 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0702 21:41:22.911583    8914 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0702 21:41:22.961536    8914 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0702 21:41:22.965739    8914 out.go:204]   - Generating certificates and keys ...
	I0702 21:41:22.965771    8914 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0702 21:41:22.965809    8914 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0702 21:41:22.965856    8914 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0702 21:41:22.965888    8914 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0702 21:41:22.965923    8914 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0702 21:41:22.965958    8914 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0702 21:41:22.965996    8914 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0702 21:41:22.966025    8914 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0702 21:41:22.966066    8914 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0702 21:41:22.966108    8914 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0702 21:41:22.966127    8914 kubeadm.go:309] [certs] Using the existing "sa" key
	I0702 21:41:22.966160    8914 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0702 21:41:23.018130    8914 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0702 21:41:23.095781    8914 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0702 21:41:23.182309    8914 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0702 21:41:23.342633    8914 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0702 21:41:23.377058    8914 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0702 21:41:23.377387    8914 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0702 21:41:23.377446    8914 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0702 21:41:23.461385    8914 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0702 21:41:23.465334    8914 out.go:204]   - Booting up control plane ...
	I0702 21:41:23.465380    8914 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0702 21:41:23.465418    8914 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0702 21:41:23.465449    8914 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0702 21:41:23.465494    8914 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0702 21:41:23.467493    8914 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0702 21:41:28.474520    8914 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.006420 seconds
	I0702 21:41:28.474776    8914 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0702 21:41:28.490702    8914 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0702 21:41:29.005993    8914 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0702 21:41:29.006108    8914 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-896000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0702 21:41:29.516144    8914 kubeadm.go:309] [bootstrap-token] Using token: bqkqks.pexoy18eetk15eux
	I0702 21:41:29.522326    8914 out.go:204]   - Configuring RBAC rules ...
	I0702 21:41:29.522462    8914 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0702 21:41:29.522594    8914 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0702 21:41:29.526309    8914 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0702 21:41:29.528594    8914 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0702 21:41:29.530335    8914 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0702 21:41:29.532013    8914 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0702 21:41:29.538371    8914 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0702 21:41:29.701673    8914 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0702 21:41:29.922641    8914 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0702 21:41:29.923125    8914 kubeadm.go:309] 
	I0702 21:41:29.923154    8914 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0702 21:41:29.923158    8914 kubeadm.go:309] 
	I0702 21:41:29.923202    8914 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0702 21:41:29.923208    8914 kubeadm.go:309] 
	I0702 21:41:29.923220    8914 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0702 21:41:29.923252    8914 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0702 21:41:29.923284    8914 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0702 21:41:29.923287    8914 kubeadm.go:309] 
	I0702 21:41:29.923310    8914 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0702 21:41:29.923313    8914 kubeadm.go:309] 
	I0702 21:41:29.923335    8914 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0702 21:41:29.923338    8914 kubeadm.go:309] 
	I0702 21:41:29.923368    8914 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0702 21:41:29.923406    8914 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0702 21:41:29.923448    8914 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0702 21:41:29.923455    8914 kubeadm.go:309] 
	I0702 21:41:29.923513    8914 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0702 21:41:29.923559    8914 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0702 21:41:29.923562    8914 kubeadm.go:309] 
	I0702 21:41:29.923611    8914 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bqkqks.pexoy18eetk15eux \
	I0702 21:41:29.923665    8914 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4ab8010a117a4bd6be25efd6459f56a0fb2de6896b05d4e484fc24c43035dfd9 \
	I0702 21:41:29.923677    8914 kubeadm.go:309] 	--control-plane 
	I0702 21:41:29.923681    8914 kubeadm.go:309] 
	I0702 21:41:29.923730    8914 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0702 21:41:29.923733    8914 kubeadm.go:309] 
	I0702 21:41:29.923777    8914 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bqkqks.pexoy18eetk15eux \
	I0702 21:41:29.923833    8914 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4ab8010a117a4bd6be25efd6459f56a0fb2de6896b05d4e484fc24c43035dfd9 
	I0702 21:41:29.924033    8914 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0702 21:41:29.924060    8914 cni.go:84] Creating CNI manager for ""
	I0702 21:41:29.924071    8914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:41:29.931170    8914 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0702 21:41:29.935161    8914 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0702 21:41:29.938254    8914 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
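	[Editor's note] The scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes) line above is minikube rendering its built-in bridge CNI config onto the node; the 496 logged bytes themselves are not shown. The sketch below writes a representative bridge conflist of that shape; the pod CIDR and plugin options are illustrative assumptions, not bytes recovered from this run:

	package main

	import "os"

	// A representative bridge CNI conflist, of the kind minikube installs at
	// /etc/cni/net.d/1-k8s.conflist when it "recommends bridge". The subnet
	// and plugin options here are assumptions for illustration only.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}`

	func main() {
		// 0644 matches typical CNI config file permissions.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}
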
	I0702 21:41:29.943215    8914 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0702 21:41:29.943260    8914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0702 21:41:29.943272    8914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-896000 minikube.k8s.io/updated_at=2024_07_02T21_41_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6e34d4fd348f73f0f8af294cc2737aeb8da39e8d minikube.k8s.io/name=stopped-upgrade-896000 minikube.k8s.io/primary=true
	I0702 21:41:29.983548    8914 kubeadm.go:1107] duration metric: took 40.327542ms to wait for elevateKubeSystemPrivileges
	I0702 21:41:29.983554    8914 ops.go:34] apiserver oom_adj: -16
	W0702 21:41:29.983633    8914 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0702 21:41:29.983639    8914 kubeadm.go:393] duration metric: took 4m12.157931416s to StartCluster
	I0702 21:41:29.983649    8914 settings.go:142] acquiring lock: {Name:mkd9027dadc8b50e6398a16ff695ba9d1e13b355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:41:29.983735    8914 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:41:29.984052    8914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/kubeconfig: {Name:mk27cb7c8451cb331bdc98ce6310b0b3aba92b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:41:29.984263    8914 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:41:29.984272    8914 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0702 21:41:29.984306    8914 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-896000"
	I0702 21:41:29.984321    8914 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-896000"
	W0702 21:41:29.984323    8914 addons.go:243] addon storage-provisioner should already be in state true
	I0702 21:41:29.984334    8914 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-896000"
	I0702 21:41:29.984342    8914 host.go:66] Checking if "stopped-upgrade-896000" exists ...
	I0702 21:41:29.984348    8914 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-896000"
	I0702 21:41:29.984361    8914 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:41:29.988172    8914 out.go:177] * Verifying Kubernetes components...
	I0702 21:41:29.988756    8914 kapi.go:59] client config for stopped-upgrade-896000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/stopped-upgrade-896000/client.key", CAFile:"/Users/jenkins/minikube-integration/19184-6175/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101e21a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0702 21:41:29.995094    8914 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-896000"
	W0702 21:41:29.995099    8914 addons.go:243] addon default-storageclass should already be in state true
	I0702 21:41:29.995108    8914 host.go:66] Checking if "stopped-upgrade-896000" exists ...
	I0702 21:41:29.995668    8914 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0702 21:41:29.995674    8914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0702 21:41:29.995679    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/id_rsa Username:docker}
	I0702 21:41:30.000194    8914 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0702 21:41:30.003195    8914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0702 21:41:30.007047    8914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0702 21:41:30.007053    8914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0702 21:41:30.007058    8914 sshutil.go:53] new ssh client: &{IP:localhost Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/stopped-upgrade-896000/id_rsa Username:docker}
	I0702 21:41:30.102648    8914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0702 21:41:30.107426    8914 api_server.go:52] waiting for apiserver process to appear ...
	I0702 21:41:30.107465    8914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0702 21:41:30.111624    8914 api_server.go:72] duration metric: took 127.349792ms to wait for apiserver process to appear ...
	I0702 21:41:30.111631    8914 api_server.go:88] waiting for apiserver healthz status ...
	I0702 21:41:30.111638    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:41:30.115823    8914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0702 21:41:30.181004    8914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0702 21:41:35.113621    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:41:35.113642    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:41:40.114123    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:41:40.114188    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:41:45.114780    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:41:45.114800    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:41:50.115319    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:41:50.115385    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:41:55.116199    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:41:55.116236    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:42:00.117239    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:42:00.117309    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0702 21:42:00.487167    8914 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0702 21:42:00.497521    8914 out.go:177] * Enabled addons: storage-provisioner
	I0702 21:42:00.504553    8914 addons.go:510] duration metric: took 30.520873875s for enable addons: enabled=[storage-provisioner]
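	[Editor's note] The asymmetry above, storage-provisioner enabled while default-storageclass errors out, is consistent with how the two callbacks run: the provisioner manifest is applied with kubectl on the node over SSH, while making a class the default requires the test host itself to list StorageClasses through the apiserver at 10.0.2.15:8443, the same endpoint whose /healthz probes keep timing out. A minimal client-go sketch of that failing call, assuming the kubeconfig path shown in the log and reducing error handling to panics:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the kubeconfig minikube updated earlier in this log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19184-6175/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// This is the request that returned "dial tcp 10.0.2.15:8443: i/o timeout":
		// listing StorageClasses needs a reachable apiserver, unlike the
		// storage-provisioner path, which only runs kubectl apply on the node.
		scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("storage classes:", len(scs.Items))
	}
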
	I0702 21:42:05.119366    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:42:05.119402    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:42:10.121580    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:42:10.121599    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:42:15.123775    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:42:15.123858    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:42:20.126485    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:42:20.126556    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:42:25.127661    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:42:25.127706    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:42:30.129884    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:42:30.129948    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:42:30.140940    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:42:30.141003    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:42:30.151897    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:42:30.151965    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:42:30.162683    8914 logs.go:276] 2 containers: [a988211d67d1 4de79ba963c9]
	I0702 21:42:30.162752    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:42:30.172907    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:42:30.172971    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:42:30.184551    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:42:30.184612    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:42:30.203441    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:42:30.203525    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:42:30.225072    8914 logs.go:276] 0 containers: []
	W0702 21:42:30.225084    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:42:30.225137    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:42:30.236090    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:42:30.236106    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:42:30.236112    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:42:30.253873    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:42:30.253883    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:42:30.265588    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:42:30.265597    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:42:30.278439    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:42:30.278451    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:42:30.314416    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:42:30.314423    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:42:30.328563    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:42:30.328571    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:42:30.339873    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:42:30.339882    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:42:30.351180    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:42:30.351191    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:42:30.363770    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:42:30.363781    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:42:30.378555    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:42:30.378565    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:42:30.404112    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:42:30.404124    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:42:30.408875    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:42:30.408899    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:42:30.447140    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:42:30.447155    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:42:32.961154    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:42:37.963318    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:42:37.963806    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:42:38.004114    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:42:38.004255    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:42:38.032296    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:42:38.032404    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:42:38.046717    8914 logs.go:276] 2 containers: [a988211d67d1 4de79ba963c9]
	I0702 21:42:38.046799    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:42:38.058535    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:42:38.058603    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:42:38.069979    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:42:38.070051    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:42:38.082927    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:42:38.082997    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:42:38.094886    8914 logs.go:276] 0 containers: []
	W0702 21:42:38.094899    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:42:38.094955    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:42:38.105227    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:42:38.105245    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:42:38.105251    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:42:38.109686    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:42:38.109695    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:42:38.123336    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:42:38.123345    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:42:38.135015    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:42:38.135028    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:42:38.146658    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:42:38.146669    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:42:38.161929    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:42:38.161940    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:42:38.173809    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:42:38.173818    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:42:38.198730    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:42:38.198738    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:42:38.210006    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:42:38.210019    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:42:38.246602    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:42:38.246612    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:42:38.281224    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:42:38.281235    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:42:38.297349    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:42:38.297359    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:42:38.308872    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:42:38.308885    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:42:40.830118    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:42:45.832717    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:42:45.832769    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:42:45.844399    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:42:45.844460    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:42:45.856766    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:42:45.856836    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:42:45.868620    8914 logs.go:276] 2 containers: [a988211d67d1 4de79ba963c9]
	I0702 21:42:45.868673    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:42:45.879439    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:42:45.879502    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:42:45.890934    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:42:45.890985    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:42:45.903901    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:42:45.903961    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:42:45.914385    8914 logs.go:276] 0 containers: []
	W0702 21:42:45.914394    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:42:45.914443    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:42:45.929168    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:42:45.929186    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:42:45.929192    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:42:45.954131    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:42:45.954140    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:42:45.966411    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:42:45.966422    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:42:45.982358    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:42:45.982372    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:42:45.997187    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:42:45.997195    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:42:46.012572    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:42:46.012583    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:42:46.025664    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:42:46.025679    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:42:46.044183    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:42:46.044192    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:42:46.057326    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:42:46.057338    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:42:46.098098    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:42:46.098122    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:42:46.103527    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:42:46.103538    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:42:46.141285    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:42:46.141298    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:42:46.155039    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:42:46.155050    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:42:48.673415    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:42:53.676148    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:42:53.676609    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:42:53.718273    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:42:53.718396    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:42:53.741082    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:42:53.741202    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:42:53.756480    8914 logs.go:276] 2 containers: [a988211d67d1 4de79ba963c9]
	I0702 21:42:53.756553    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:42:53.769110    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:42:53.769183    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:42:53.780203    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:42:53.780270    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:42:53.794294    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:42:53.794361    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:42:53.805203    8914 logs.go:276] 0 containers: []
	W0702 21:42:53.805215    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:42:53.805265    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:42:53.816806    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:42:53.816822    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:42:53.816827    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:42:53.831787    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:42:53.831799    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:42:53.848272    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:42:53.848288    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:42:53.862754    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:42:53.862767    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:42:53.876924    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:42:53.876938    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:42:53.902018    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:42:53.902056    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:42:53.915836    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:42:53.915849    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:42:53.921277    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:42:53.921291    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:42:53.960801    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:42:53.960815    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:42:53.976135    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:42:53.976146    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:42:53.994371    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:42:53.994381    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:42:54.006254    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:42:54.006265    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:42:54.044594    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:42:54.044603    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:42:56.559508    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:43:01.561600    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:43:01.561799    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:43:01.586949    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:43:01.587056    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:43:01.603738    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:43:01.603813    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:43:01.616597    8914 logs.go:276] 2 containers: [a988211d67d1 4de79ba963c9]
	I0702 21:43:01.616659    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:43:01.628148    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:43:01.628213    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:43:01.638529    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:43:01.638588    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:43:01.648666    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:43:01.648720    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:43:01.659526    8914 logs.go:276] 0 containers: []
	W0702 21:43:01.659541    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:43:01.659593    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:43:01.670145    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:43:01.670161    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:43:01.670166    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:43:01.694939    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:43:01.694946    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:43:01.706421    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:43:01.706435    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:43:01.744008    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:43:01.744018    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:43:01.748617    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:43:01.748625    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:43:01.759980    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:43:01.759992    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:43:01.775053    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:43:01.775064    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:43:01.788517    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:43:01.788531    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:43:01.804932    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:43:01.804941    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:43:01.839003    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:43:01.839017    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:43:01.853328    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:43:01.853341    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:43:01.866861    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:43:01.866874    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:43:01.877824    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:43:01.877834    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:43:04.396829    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:43:09.397682    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:43:09.397791    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:43:09.408753    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:43:09.408812    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:43:09.419896    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:43:09.419978    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:43:09.434345    8914 logs.go:276] 2 containers: [a988211d67d1 4de79ba963c9]
	I0702 21:43:09.434410    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:43:09.446466    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:43:09.446548    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:43:09.457418    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:43:09.457490    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:43:09.468477    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:43:09.468532    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:43:09.483184    8914 logs.go:276] 0 containers: []
	W0702 21:43:09.483193    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:43:09.483236    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:43:09.495235    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:43:09.495255    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:43:09.495260    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:43:09.507233    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:43:09.507242    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:43:09.511809    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:43:09.511820    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:43:09.553551    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:43:09.553566    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:43:09.572790    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:43:09.572807    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:43:09.586096    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:43:09.586106    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:43:09.605250    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:43:09.605261    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:43:09.624282    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:43:09.624291    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:43:09.663106    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:43:09.663118    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:43:09.677473    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:43:09.677488    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:43:09.690548    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:43:09.690561    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:43:09.707508    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:43:09.707518    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:43:09.720359    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:43:09.720369    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
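The probe cycle above (api_server.go:253/269) repeats throughout this run: a GET against https://10.0.2.15:8443/healthz that times out after roughly five seconds, is reported as "stopped", and is retried a couple of seconds later after a log-gathering pass. The following is a minimal sketch of that polling pattern, assuming the endpoint, timeout, and retry cadence read off the timestamps above; it is an illustration only, not minikube's actual probe code.

```go
// healthz_probe.go: illustrative sketch of the healthz polling pattern seen
// in this log. Endpoint, timeout, and sleep interval are assumptions taken
// from the log lines, not from the minikube code base.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(client *http.Client, url string) error {
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "context deadline exceeded", as in the log above
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	url := "https://10.0.2.15:8443/healthz" // from the log; adjust for your VM
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5 s gap between check and "stopped"
		Transport: &http.Transport{
			// The guest apiserver cert is not trusted by the host, so skip
			// verification for this illustrative probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		if err := checkHealthz(client, url); err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
		} else {
			fmt.Println("apiserver is healthy")
			return
		}
		time.Sleep(2 * time.Second) // retry cadence; the log shows ~2-3 s between rounds
	}
}
```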
	I0702 21:43:12.248828    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:43:17.251649    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:43:17.252052    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:43:17.288530    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:43:17.288656    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:43:17.311822    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:43:17.311933    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:43:17.326804    8914 logs.go:276] 2 containers: [a988211d67d1 4de79ba963c9]
	I0702 21:43:17.326866    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:43:17.338229    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:43:17.338298    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:43:17.348942    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:43:17.349012    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:43:17.359472    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:43:17.359531    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:43:17.369826    8914 logs.go:276] 0 containers: []
	W0702 21:43:17.369842    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:43:17.369893    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:43:17.380454    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:43:17.380471    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:43:17.380476    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:43:17.398544    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:43:17.398557    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:43:17.409640    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:43:17.409653    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:43:17.422054    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:43:17.422064    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:43:17.441291    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:43:17.441300    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:43:17.453328    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:43:17.453342    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:43:17.476596    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:43:17.476603    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:43:17.487495    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:43:17.487510    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:43:17.524664    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:43:17.524673    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:43:17.528994    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:43:17.529001    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:43:17.562082    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:43:17.562092    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:43:17.576654    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:43:17.576664    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:43:17.588038    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:43:17.588049    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:43:20.127949    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:43:25.129623    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:43:25.129891    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:43:25.155797    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:43:25.155922    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:43:25.172676    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:43:25.172756    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:43:25.185958    8914 logs.go:276] 2 containers: [a988211d67d1 4de79ba963c9]
	I0702 21:43:25.186028    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:43:25.197173    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:43:25.197237    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:43:25.208188    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:43:25.208261    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:43:25.218379    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:43:25.218445    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:43:25.228167    8914 logs.go:276] 0 containers: []
	W0702 21:43:25.228181    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:43:25.228240    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:43:25.245098    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:43:25.245115    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:43:25.245121    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:43:25.278946    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:43:25.278960    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:43:25.293338    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:43:25.293352    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:43:25.307554    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:43:25.307566    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:43:25.319586    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:43:25.319599    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:43:25.334106    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:43:25.334118    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:43:25.355070    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:43:25.355080    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:43:25.380111    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:43:25.380120    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:43:25.384254    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:43:25.384262    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:43:25.395325    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:43:25.395338    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:43:25.407091    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:43:25.407102    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:43:25.418659    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:43:25.418670    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:43:25.432874    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:43:25.432885    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:43:27.973419    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:43:32.975932    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:43:32.976020    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:43:32.987764    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:43:32.987816    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:43:33.002275    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:43:33.002337    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:43:33.013501    8914 logs.go:276] 2 containers: [a988211d67d1 4de79ba963c9]
	I0702 21:43:33.013550    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:43:33.030095    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:43:33.030139    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:43:33.040957    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:43:33.041022    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:43:33.052084    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:43:33.052156    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:43:33.062774    8914 logs.go:276] 0 containers: []
	W0702 21:43:33.062787    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:43:33.062847    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:43:33.074479    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:43:33.074496    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:43:33.074502    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:43:33.094782    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:43:33.094792    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:43:33.107285    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:43:33.107297    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:43:33.126313    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:43:33.126324    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:43:33.165066    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:43:33.165079    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:43:33.169455    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:43:33.169463    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:43:33.187598    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:43:33.187609    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:43:33.201047    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:43:33.201059    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:43:33.213437    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:43:33.213447    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:43:33.251283    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:43:33.251292    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:43:33.265730    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:43:33.265742    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:43:33.291088    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:43:33.291102    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:43:33.315280    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:43:33.315296    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:43:35.830388    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:43:40.832749    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:43:40.833230    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:43:40.875244    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:43:40.875377    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:43:40.896058    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:43:40.896170    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:43:40.910171    8914 logs.go:276] 2 containers: [a988211d67d1 4de79ba963c9]
	I0702 21:43:40.910239    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:43:40.921927    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:43:40.921995    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:43:40.933392    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:43:40.933462    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:43:40.943636    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:43:40.943700    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:43:40.953784    8914 logs.go:276] 0 containers: []
	W0702 21:43:40.953796    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:43:40.953849    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:43:40.964370    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:43:40.964386    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:43:40.964392    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:43:40.975550    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:43:40.975561    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:43:40.992768    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:43:40.992778    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:43:41.004492    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:43:41.004501    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:43:41.027584    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:43:41.027591    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:43:41.031904    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:43:41.031912    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:43:41.066584    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:43:41.066596    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:43:41.081674    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:43:41.081684    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:43:41.093170    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:43:41.093182    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:43:41.107707    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:43:41.107715    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:43:41.118947    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:43:41.118958    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:43:41.156873    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:43:41.156882    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:43:41.170657    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:43:41.170669    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:43:43.684449    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:43:48.687090    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:43:48.687545    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:43:48.727275    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:43:48.727410    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:43:48.748441    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:43:48.748546    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:43:48.764664    8914 logs.go:276] 4 containers: [45017d44a390 bfd1c845f987 a988211d67d1 4de79ba963c9]
	I0702 21:43:48.764742    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:43:48.777365    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:43:48.777440    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:43:48.788072    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:43:48.788145    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:43:48.799264    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:43:48.799339    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:43:48.814662    8914 logs.go:276] 0 containers: []
	W0702 21:43:48.814675    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:43:48.814734    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:43:48.825026    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:43:48.825044    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:43:48.825048    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:43:48.837001    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:43:48.837012    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:43:48.848369    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:43:48.848380    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:43:48.884066    8914 logs.go:123] Gathering logs for coredns [45017d44a390] ...
	I0702 21:43:48.884076    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45017d44a390"
	I0702 21:43:48.895217    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:43:48.895230    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:43:48.922024    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:43:48.922033    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:43:48.926763    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:43:48.926772    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:43:48.942383    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:43:48.942394    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:43:48.953873    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:43:48.953884    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:43:48.964694    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:43:48.964705    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:43:48.976081    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:43:48.976090    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:43:49.009913    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:43:49.009924    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:43:49.029290    8914 logs.go:123] Gathering logs for coredns [bfd1c845f987] ...
	I0702 21:43:49.029300    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd1c845f987"
	I0702 21:43:49.040415    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:43:49.040425    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:43:49.054964    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:43:49.054975    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
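Before each gathering pass, one container listing per control-plane component is run with a k8s_&lt;name&gt; filter (logs.go:276); in this cycle the coredns count has grown from 2 to 4 containers, while kindnet consistently matches nothing and triggers the logs.go:278 warning. Below is a minimal sketch of that enumeration, assuming a local docker CLI stands in for minikube's SSH runner; the component list and the k8s_ name prefix are taken from the log lines.

```go
// list_containers.go: illustrative sketch of the per-component container
// enumeration visible above, mirroring:
//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose
// name matches k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("listing %s: %v\n", c, err)
			continue
		}
		// A zero count corresponds to the warning in the log,
		// e.g. `No container was found matching "kindnet"`.
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
```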
	I0702 21:43:51.582894    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:43:56.585553    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:43:56.585990    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:43:56.627305    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:43:56.627438    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:43:56.651870    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:43:56.652005    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:43:56.666698    8914 logs.go:276] 4 containers: [45017d44a390 bfd1c845f987 a988211d67d1 4de79ba963c9]
	I0702 21:43:56.666775    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:43:56.679505    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:43:56.679568    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:43:56.690267    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:43:56.690330    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:43:56.701655    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:43:56.701718    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:43:56.712681    8914 logs.go:276] 0 containers: []
	W0702 21:43:56.712692    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:43:56.712755    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:43:56.723231    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:43:56.723248    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:43:56.723253    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:43:56.737369    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:43:56.737381    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:43:56.752535    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:43:56.752544    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:43:56.764178    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:43:56.764191    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:43:56.785565    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:43:56.785576    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:43:56.800552    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:43:56.800565    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:43:56.835796    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:43:56.835808    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:43:56.850455    8914 logs.go:123] Gathering logs for coredns [bfd1c845f987] ...
	I0702 21:43:56.850468    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd1c845f987"
	I0702 21:43:56.868364    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:43:56.868375    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:43:56.879484    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:43:56.879495    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:43:56.890765    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:43:56.890776    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:43:56.927365    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:43:56.927374    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:43:56.931601    8914 logs.go:123] Gathering logs for coredns [45017d44a390] ...
	I0702 21:43:56.931608    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45017d44a390"
	I0702 21:43:56.943048    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:43:56.943057    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:43:56.967468    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:43:56.967476    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:43:59.480805    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:44:04.483403    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:44:04.483794    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:44:04.516417    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:44:04.516561    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:44:04.536300    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:44:04.536410    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:44:04.551637    8914 logs.go:276] 4 containers: [45017d44a390 bfd1c845f987 a988211d67d1 4de79ba963c9]
	I0702 21:44:04.551724    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:44:04.569608    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:44:04.569680    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:44:04.580309    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:44:04.580380    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:44:04.590865    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:44:04.590937    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:44:04.601124    8914 logs.go:276] 0 containers: []
	W0702 21:44:04.601138    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:44:04.601200    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:44:04.615515    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:44:04.615533    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:44:04.615539    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:44:04.638987    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:44:04.638998    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:44:04.650202    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:44:04.650214    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:44:04.688961    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:44:04.688972    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:44:04.725044    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:44:04.725060    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:44:04.739717    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:44:04.739727    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:44:04.751617    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:44:04.751629    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:44:04.765518    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:44:04.765529    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:44:04.777272    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:44:04.777286    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:44:04.789344    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:44:04.789354    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:44:04.794008    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:44:04.794015    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:44:04.808057    8914 logs.go:123] Gathering logs for coredns [45017d44a390] ...
	I0702 21:44:04.808066    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45017d44a390"
	I0702 21:44:04.819754    8914 logs.go:123] Gathering logs for coredns [bfd1c845f987] ...
	I0702 21:44:04.819763    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd1c845f987"
	I0702 21:44:04.831007    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:44:04.831017    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:44:04.843209    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:44:04.843224    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:44:07.362520    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:44:12.364808    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:44:12.365197    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:44:12.399724    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:44:12.399849    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:44:12.418362    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:44:12.418455    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:44:12.435855    8914 logs.go:276] 4 containers: [45017d44a390 bfd1c845f987 a988211d67d1 4de79ba963c9]
	I0702 21:44:12.435933    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:44:12.447676    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:44:12.447740    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:44:12.462430    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:44:12.462500    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:44:12.473567    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:44:12.473630    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:44:12.488356    8914 logs.go:276] 0 containers: []
	W0702 21:44:12.488366    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:44:12.488420    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:44:12.500875    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:44:12.500890    8914 logs.go:123] Gathering logs for coredns [45017d44a390] ...
	I0702 21:44:12.500894    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45017d44a390"
	I0702 21:44:12.512606    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:44:12.512617    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:44:12.524487    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:44:12.524498    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:44:12.536323    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:44:12.536334    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:44:12.547945    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:44:12.547958    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:44:12.573084    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:44:12.573094    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:44:12.611009    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:44:12.611019    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:44:12.615294    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:44:12.615301    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:44:12.636273    8914 logs.go:123] Gathering logs for coredns [bfd1c845f987] ...
	I0702 21:44:12.636286    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd1c845f987"
	I0702 21:44:12.647692    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:44:12.647704    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:44:12.662279    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:44:12.662290    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:44:12.679992    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:44:12.680003    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:44:12.691307    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:44:12.691319    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:44:12.702804    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:44:12.702815    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:44:12.716943    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:44:12.716956    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:44:15.256275    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:44:20.258683    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:44:20.258809    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:44:20.272550    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:44:20.272629    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:44:20.283596    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:44:20.283661    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:44:20.294235    8914 logs.go:276] 4 containers: [45017d44a390 bfd1c845f987 a988211d67d1 4de79ba963c9]
	I0702 21:44:20.294298    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:44:20.304798    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:44:20.304869    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:44:20.315493    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:44:20.315563    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:44:20.325974    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:44:20.326048    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:44:20.336404    8914 logs.go:276] 0 containers: []
	W0702 21:44:20.336419    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:44:20.336479    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:44:20.346796    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:44:20.346814    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:44:20.346820    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:44:20.380433    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:44:20.380445    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:44:20.394507    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:44:20.394520    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:44:20.415039    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:44:20.415050    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:44:20.431965    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:44:20.431978    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:44:20.449326    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:44:20.449335    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:44:20.460423    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:44:20.460433    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:44:20.472294    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:44:20.472306    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:44:20.510535    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:44:20.510545    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:44:20.534894    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:44:20.534901    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:44:20.538959    8914 logs.go:123] Gathering logs for coredns [45017d44a390] ...
	I0702 21:44:20.538967    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45017d44a390"
	I0702 21:44:20.550542    8914 logs.go:123] Gathering logs for coredns [bfd1c845f987] ...
	I0702 21:44:20.550557    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd1c845f987"
	I0702 21:44:20.566499    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:44:20.566512    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:44:20.582091    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:44:20.582102    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:44:20.593877    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:44:20.593890    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
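Each "Gathering logs for X ..." pair above maps a log source to one shell command: journalctl units for kubelet and Docker/cri-docker, dmesg for kernel warnings, crictl or docker ps for container status, kubectl describe nodes via the pinned v1.24.1 binary, and docker logs --tail 400 &lt;id&gt; per container. The following is a minimal sketch of that dispatch, assuming local /bin/bash -c execution in place of the SSH runner; the container ID shown is hypothetical and only illustrates how a per-container command is formed.

```go
// gather_logs.go: illustrative sketch of the per-source log-gathering
// commands seen in this transcript. The command strings reproduce the log
// verbatim; running them locally through /bin/bash -c is an assumption
// standing in for minikube's SSH runner.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// In the real flow this ID comes from the docker ps enumeration; this
	// value is hypothetical, for illustration only.
	id := "ba74ee05ff1b"

	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"kube-apiserver":   fmt.Sprintf("docker logs --tail 400 %s", id),
	}

	// Map iteration order is random in Go, which is harmless here; the log
	// above likewise gathers sources in varying order between cycles.
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("  %s failed: %v\n", name, err)
		}
		_ = out // a real gatherer would append this to the report
	}
}
```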
	I0702 21:44:23.107300    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:44:28.109545    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:44:28.109896    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:44:28.143670    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:44:28.143809    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:44:28.180128    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:44:28.180208    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:44:28.192931    8914 logs.go:276] 4 containers: [45017d44a390 bfd1c845f987 a988211d67d1 4de79ba963c9]
	I0702 21:44:28.193003    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:44:28.204987    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:44:28.205046    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:44:28.215860    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:44:28.215926    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:44:28.227869    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:44:28.227934    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:44:28.238502    8914 logs.go:276] 0 containers: []
	W0702 21:44:28.238513    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:44:28.238563    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:44:28.248731    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:44:28.248749    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:44:28.248753    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:44:28.282550    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:44:28.282559    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:44:28.307446    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:44:28.307455    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:44:28.319194    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:44:28.319204    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:44:28.357633    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:44:28.357646    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:44:28.371846    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:44:28.371858    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:44:28.385629    8914 logs.go:123] Gathering logs for coredns [45017d44a390] ...
	I0702 21:44:28.385642    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45017d44a390"
	I0702 21:44:28.397560    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:44:28.397574    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:44:28.409548    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:44:28.409558    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:44:28.414192    8914 logs.go:123] Gathering logs for coredns [bfd1c845f987] ...
	I0702 21:44:28.414201    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd1c845f987"
	I0702 21:44:28.431442    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:44:28.431457    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:44:28.443151    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:44:28.443164    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:44:28.454564    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:44:28.454578    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:44:28.466485    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:44:28.466498    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:44:28.481507    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:44:28.481519    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:44:31.005276    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:44:36.007982    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:44:36.008420    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:44:36.051890    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:44:36.052025    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:44:36.075310    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:44:36.075427    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:44:36.090820    8914 logs.go:276] 4 containers: [45017d44a390 bfd1c845f987 a988211d67d1 4de79ba963c9]
	I0702 21:44:36.090902    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:44:36.103557    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:44:36.103632    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:44:36.114483    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:44:36.114555    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:44:36.125470    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:44:36.125540    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:44:36.135836    8914 logs.go:276] 0 containers: []
	W0702 21:44:36.135847    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:44:36.135902    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:44:36.146246    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:44:36.146264    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:44:36.146270    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:44:36.150438    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:44:36.150445    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:44:36.163930    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:44:36.163940    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:44:36.175558    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:44:36.175570    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:44:36.214185    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:44:36.214197    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:44:36.228692    8914 logs.go:123] Gathering logs for coredns [bfd1c845f987] ...
	I0702 21:44:36.228703    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd1c845f987"
	I0702 21:44:36.241005    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:44:36.241018    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:44:36.252699    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:44:36.252712    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:44:36.264396    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:44:36.264406    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:44:36.280918    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:44:36.280928    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:44:36.300663    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:44:36.300678    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:44:36.324499    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:44:36.324505    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:44:36.360320    8914 logs.go:123] Gathering logs for coredns [45017d44a390] ...
	I0702 21:44:36.360328    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45017d44a390"
	I0702 21:44:36.376337    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:44:36.376350    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:44:36.387899    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:44:36.387911    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:44:38.909384    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:44:43.911965    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:44:43.912211    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:44:43.937697    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:44:43.937850    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:44:43.955318    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:44:43.955409    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:44:43.968654    8914 logs.go:276] 4 containers: [45017d44a390 bfd1c845f987 a988211d67d1 4de79ba963c9]
	I0702 21:44:43.968730    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:44:43.980330    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:44:43.980393    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:44:43.992369    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:44:43.992439    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:44:44.002972    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:44:44.003043    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:44:44.013326    8914 logs.go:276] 0 containers: []
	W0702 21:44:44.013337    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:44:44.013395    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:44:44.024011    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:44:44.024032    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:44:44.024037    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:44:44.038245    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:44:44.038257    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:44:44.049788    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:44:44.049801    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:44:44.074574    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:44:44.074582    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:44:44.078838    8914 logs.go:123] Gathering logs for coredns [bfd1c845f987] ...
	I0702 21:44:44.078846    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd1c845f987"
	I0702 21:44:44.090359    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:44:44.090369    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:44:44.101831    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:44:44.101842    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:44:44.139441    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:44:44.139450    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:44:44.153965    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:44:44.153977    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:44:44.165861    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:44:44.165874    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:44:44.188152    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:44:44.188162    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:44:44.203012    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:44:44.203025    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:44:44.240981    8914 logs.go:123] Gathering logs for coredns [45017d44a390] ...
	I0702 21:44:44.240994    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45017d44a390"
	I0702 21:44:44.253306    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:44:44.253317    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:44:44.268337    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:44:44.268348    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:44:46.782400    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:44:51.785140    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:44:51.785575    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:44:51.831448    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:44:51.831553    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:44:51.850859    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:44:51.850933    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:44:51.865708    8914 logs.go:276] 4 containers: [45017d44a390 bfd1c845f987 a988211d67d1 4de79ba963c9]
	I0702 21:44:51.865780    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:44:51.876649    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:44:51.876713    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:44:51.887157    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:44:51.887216    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:44:51.897158    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:44:51.897223    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:44:51.910606    8914 logs.go:276] 0 containers: []
	W0702 21:44:51.910619    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:44:51.910673    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:44:51.921482    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:44:51.921500    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:44:51.921507    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:44:51.925816    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:44:51.925823    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:44:51.939725    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:44:51.939740    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:44:51.951192    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:44:51.951205    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:44:51.976057    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:44:51.976065    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:44:51.994684    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:44:51.994699    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:44:52.029142    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:44:52.029155    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:44:52.047550    8914 logs.go:123] Gathering logs for coredns [45017d44a390] ...
	I0702 21:44:52.047565    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45017d44a390"
	I0702 21:44:52.059386    8914 logs.go:123] Gathering logs for coredns [bfd1c845f987] ...
	I0702 21:44:52.059395    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd1c845f987"
	I0702 21:44:52.071203    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:44:52.071213    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:44:52.085574    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:44:52.085583    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:44:52.097000    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:44:52.097014    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:44:52.114028    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:44:52.114043    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:44:52.150759    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:44:52.150770    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:44:52.164143    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:44:52.164157    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:44:54.675752    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:44:59.678319    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:44:59.678780    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:44:59.716179    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:44:59.716317    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:44:59.738874    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:44:59.738989    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:44:59.754344    8914 logs.go:276] 4 containers: [45017d44a390 bfd1c845f987 a988211d67d1 4de79ba963c9]
	I0702 21:44:59.754424    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:44:59.766956    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:44:59.767029    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:44:59.778603    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:44:59.778675    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:44:59.790855    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:44:59.790929    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:44:59.801266    8914 logs.go:276] 0 containers: []
	W0702 21:44:59.801278    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:44:59.801330    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:44:59.812501    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:44:59.812518    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:44:59.812522    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:44:59.850694    8914 logs.go:123] Gathering logs for coredns [45017d44a390] ...
	I0702 21:44:59.850705    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45017d44a390"
	I0702 21:44:59.866560    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:44:59.866573    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:44:59.878698    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:44:59.878712    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:44:59.893319    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:44:59.893331    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:44:59.905535    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:44:59.905544    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:44:59.917199    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:44:59.917213    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:44:59.921559    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:44:59.921567    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:44:59.958596    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:44:59.958609    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:44:59.978100    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:44:59.978111    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:44:59.998035    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:44:59.998046    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:45:00.015414    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:45:00.015428    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:45:00.039136    8914 logs.go:123] Gathering logs for coredns [bfd1c845f987] ...
	I0702 21:45:00.039144    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd1c845f987"
	I0702 21:45:00.051442    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:45:00.051452    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:45:00.062729    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:45:00.062743    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:45:02.576110    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:45:07.578795    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:45:07.579742    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:45:07.620572    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:45:07.620704    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:45:07.643014    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:45:07.643129    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:45:07.658840    8914 logs.go:276] 4 containers: [45017d44a390 bfd1c845f987 a988211d67d1 4de79ba963c9]
	I0702 21:45:07.658927    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:45:07.677956    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:45:07.678024    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:45:07.690621    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:45:07.690689    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:45:07.713164    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:45:07.713237    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:45:07.730668    8914 logs.go:276] 0 containers: []
	W0702 21:45:07.730685    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:45:07.730745    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:45:07.741649    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:45:07.741667    8914 logs.go:123] Gathering logs for coredns [bfd1c845f987] ...
	I0702 21:45:07.741672    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd1c845f987"
	I0702 21:45:07.753168    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:45:07.753179    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:45:07.764900    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:45:07.764913    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:45:07.781345    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:45:07.781357    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:45:07.799363    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:45:07.799374    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:45:07.811024    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:45:07.811036    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:45:07.845438    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:45:07.845449    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:45:07.859911    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:45:07.859925    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:45:07.871385    8914 logs.go:123] Gathering logs for coredns [45017d44a390] ...
	I0702 21:45:07.871398    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45017d44a390"
	I0702 21:45:07.882950    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:45:07.882961    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:45:07.894514    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:45:07.894527    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:45:07.930858    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:45:07.930866    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:45:07.944718    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:45:07.944728    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:45:07.956097    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:45:07.956106    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:45:07.960740    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:45:07.960746    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:45:10.487559    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:45:15.489893    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:45:15.490303    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:45:15.530447    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:45:15.530568    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:45:15.552565    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:45:15.552678    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:45:15.568944    8914 logs.go:276] 4 containers: [45017d44a390 bfd1c845f987 a988211d67d1 4de79ba963c9]
	I0702 21:45:15.569025    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:45:15.581327    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:45:15.581389    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:45:15.592820    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:45:15.592890    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:45:15.603612    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:45:15.603671    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:45:15.614394    8914 logs.go:276] 0 containers: []
	W0702 21:45:15.614407    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:45:15.614469    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:45:15.631813    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:45:15.631829    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:45:15.631834    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:45:15.670262    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:45:15.670270    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:45:15.694692    8914 logs.go:123] Gathering logs for coredns [bfd1c845f987] ...
	I0702 21:45:15.694699    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd1c845f987"
	I0702 21:45:15.706448    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:45:15.706460    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:45:15.718508    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:45:15.718517    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:45:15.737024    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:45:15.737036    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:45:15.741410    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:45:15.741419    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:45:15.755485    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:45:15.755496    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:45:15.767551    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:45:15.767565    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:45:15.785429    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:45:15.785440    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:45:15.798923    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:45:15.798936    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:45:15.810764    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:45:15.810777    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:45:15.845506    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:45:15.845518    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:45:15.864395    8914 logs.go:123] Gathering logs for coredns [45017d44a390] ...
	I0702 21:45:15.864408    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45017d44a390"
	I0702 21:45:15.876766    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:45:15.876776    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:45:18.389948    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:45:23.392708    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:45:23.393181    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0702 21:45:23.433478    8914 logs.go:276] 1 containers: [ba74ee05ff1b]
	I0702 21:45:23.433620    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0702 21:45:23.453106    8914 logs.go:276] 1 containers: [3ba1dd32ac9c]
	I0702 21:45:23.453195    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0702 21:45:23.467015    8914 logs.go:276] 4 containers: [45017d44a390 bfd1c845f987 a988211d67d1 4de79ba963c9]
	I0702 21:45:23.467082    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0702 21:45:23.478983    8914 logs.go:276] 1 containers: [ff2acf052fc5]
	I0702 21:45:23.479047    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0702 21:45:23.489205    8914 logs.go:276] 1 containers: [00212283f46b]
	I0702 21:45:23.489271    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0702 21:45:23.499713    8914 logs.go:276] 1 containers: [7290ed424321]
	I0702 21:45:23.499777    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0702 21:45:23.510244    8914 logs.go:276] 0 containers: []
	W0702 21:45:23.510256    8914 logs.go:278] No container was found matching "kindnet"
	I0702 21:45:23.510308    8914 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0702 21:45:23.520711    8914 logs.go:276] 1 containers: [8e17643d742f]
	I0702 21:45:23.520727    8914 logs.go:123] Gathering logs for coredns [4de79ba963c9] ...
	I0702 21:45:23.520733    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4de79ba963c9"
	I0702 21:45:23.532980    8914 logs.go:123] Gathering logs for container status ...
	I0702 21:45:23.532989    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0702 21:45:23.544528    8914 logs.go:123] Gathering logs for dmesg ...
	I0702 21:45:23.544540    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0702 21:45:23.548807    8914 logs.go:123] Gathering logs for kube-proxy [00212283f46b] ...
	I0702 21:45:23.548814    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00212283f46b"
	I0702 21:45:23.561179    8914 logs.go:123] Gathering logs for kube-scheduler [ff2acf052fc5] ...
	I0702 21:45:23.561191    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff2acf052fc5"
	I0702 21:45:23.576616    8914 logs.go:123] Gathering logs for coredns [bfd1c845f987] ...
	I0702 21:45:23.576628    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd1c845f987"
	I0702 21:45:23.588804    8914 logs.go:123] Gathering logs for coredns [a988211d67d1] ...
	I0702 21:45:23.588815    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a988211d67d1"
	I0702 21:45:23.600666    8914 logs.go:123] Gathering logs for kube-controller-manager [7290ed424321] ...
	I0702 21:45:23.600677    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7290ed424321"
	I0702 21:45:23.618160    8914 logs.go:123] Gathering logs for storage-provisioner [8e17643d742f] ...
	I0702 21:45:23.618170    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e17643d742f"
	I0702 21:45:23.629921    8914 logs.go:123] Gathering logs for Docker ...
	I0702 21:45:23.629933    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0702 21:45:23.653748    8914 logs.go:123] Gathering logs for kubelet ...
	I0702 21:45:23.653754    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0702 21:45:23.691609    8914 logs.go:123] Gathering logs for kube-apiserver [ba74ee05ff1b] ...
	I0702 21:45:23.691617    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba74ee05ff1b"
	I0702 21:45:23.706508    8914 logs.go:123] Gathering logs for etcd [3ba1dd32ac9c] ...
	I0702 21:45:23.706516    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba1dd32ac9c"
	I0702 21:45:23.720827    8914 logs.go:123] Gathering logs for coredns [45017d44a390] ...
	I0702 21:45:23.720840    8914 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45017d44a390"
	I0702 21:45:23.732146    8914 logs.go:123] Gathering logs for describe nodes ...
	I0702 21:45:23.732156    8914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0702 21:45:26.268503    8914 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0702 21:45:31.271295    8914 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0702 21:45:31.277505    8914 out.go:177] 
	W0702 21:45:31.280389    8914 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0702 21:45:31.280418    8914 out.go:239] * 
	* 
	W0702 21:45:31.282716    8914 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:45:31.288445    8914 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-896000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.14s)
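Note: this Upgrade failure is a health-check timeout rather than a crash. After the restarted VM comes up, minikube repeatedly polls https://10.0.2.15:8443/healthz; every probe above times out after ~5s, and once the 6m0s node wait expires the run exits with GUEST_START. A minimal diagnostic sketch (not part of the test run, and assuming the stopped-upgrade-896000 profile and its VM are still present and curl is available in the guest) would be to probe the same endpoint and dump the apiserver container identified in the log gathering above:

	# Probe the endpoint the test polls (profile name and container ID taken from the log above)
	out/minikube-darwin-arm64 -p stopped-upgrade-896000 ssh -- curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# Inspect the kube-apiserver container directly for startup errors
	out/minikube-darwin-arm64 -p stopped-upgrade-896000 ssh -- docker logs --tail 400 ba74ee05ff1b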

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-152000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-152000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.900953541s)

                                                
                                                
-- stdout --
	* [old-k8s-version-152000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-152000" primary control-plane node in "old-k8s-version-152000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-152000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0702 21:40:15.165498    9073 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:40:15.165647    9073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:15.165652    9073 out.go:304] Setting ErrFile to fd 2...
	I0702 21:40:15.165654    9073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:15.165792    9073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:40:15.167022    9073 out.go:298] Setting JSON to false
	I0702 21:40:15.184157    9073 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5984,"bootTime":1719975631,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:40:15.184221    9073 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:40:15.189212    9073 out.go:177] * [old-k8s-version-152000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:40:15.196174    9073 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:40:15.196222    9073 notify.go:220] Checking for updates...
	I0702 21:40:15.203092    9073 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:40:15.206169    9073 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:40:15.209169    9073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:40:15.212113    9073 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:40:15.215163    9073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:40:15.218415    9073 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:40:15.218488    9073 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:40:15.218541    9073 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:40:15.223095    9073 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:40:15.230092    9073 start.go:297] selected driver: qemu2
	I0702 21:40:15.230098    9073 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:40:15.230104    9073 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:40:15.232514    9073 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:40:15.236067    9073 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:40:15.239227    9073 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:40:15.239254    9073 cni.go:84] Creating CNI manager for ""
	I0702 21:40:15.239261    9073 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0702 21:40:15.239291    9073 start.go:340] cluster config:
	{Name:old-k8s-version-152000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:40:15.243233    9073 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:15.252143    9073 out.go:177] * Starting "old-k8s-version-152000" primary control-plane node in "old-k8s-version-152000" cluster
	I0702 21:40:15.256048    9073 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0702 21:40:15.256068    9073 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0702 21:40:15.256081    9073 cache.go:56] Caching tarball of preloaded images
	I0702 21:40:15.256164    9073 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:40:15.256170    9073 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0702 21:40:15.256241    9073 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/old-k8s-version-152000/config.json ...
	I0702 21:40:15.256252    9073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/old-k8s-version-152000/config.json: {Name:mk0c48835f12dc5e948a3fa09cfedb5b4cc915f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:40:15.256480    9073 start.go:360] acquireMachinesLock for old-k8s-version-152000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:40:15.256513    9073 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "old-k8s-version-152000"
	I0702 21:40:15.256527    9073 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:40:15.256561    9073 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:40:15.260103    9073 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:40:15.284822    9073 start.go:159] libmachine.API.Create for "old-k8s-version-152000" (driver="qemu2")
	I0702 21:40:15.284853    9073 client.go:168] LocalClient.Create starting
	I0702 21:40:15.284932    9073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:40:15.284962    9073 main.go:141] libmachine: Decoding PEM data...
	I0702 21:40:15.284976    9073 main.go:141] libmachine: Parsing certificate...
	I0702 21:40:15.285023    9073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:40:15.285045    9073 main.go:141] libmachine: Decoding PEM data...
	I0702 21:40:15.285053    9073 main.go:141] libmachine: Parsing certificate...
	I0702 21:40:15.285370    9073 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:40:15.413429    9073 main.go:141] libmachine: Creating SSH key...
	I0702 21:40:15.630694    9073 main.go:141] libmachine: Creating Disk image...
	I0702 21:40:15.630703    9073 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:40:15.630928    9073 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2
	I0702 21:40:15.641219    9073 main.go:141] libmachine: STDOUT: 
	I0702 21:40:15.641237    9073 main.go:141] libmachine: STDERR: 
	I0702 21:40:15.641300    9073 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2 +20000M
	I0702 21:40:15.649268    9073 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:40:15.649286    9073 main.go:141] libmachine: STDERR: 
	I0702 21:40:15.649300    9073 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2
	I0702 21:40:15.649304    9073 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:40:15.649339    9073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b4:a6:5e:9e:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2
	I0702 21:40:15.651151    9073 main.go:141] libmachine: STDOUT: 
	I0702 21:40:15.651166    9073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:40:15.651185    9073 client.go:171] duration metric: took 366.33375ms to LocalClient.Create
	I0702 21:40:17.653463    9073 start.go:128] duration metric: took 2.396914333s to createHost
	I0702 21:40:17.653547    9073 start.go:83] releasing machines lock for "old-k8s-version-152000", held for 2.397071209s
	W0702 21:40:17.653605    9073 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:40:17.667049    9073 out.go:177] * Deleting "old-k8s-version-152000" in qemu2 ...
	W0702 21:40:17.690039    9073 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:40:17.690076    9073 start.go:728] Will try again in 5 seconds ...
	I0702 21:40:22.692122    9073 start.go:360] acquireMachinesLock for old-k8s-version-152000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:40:22.692427    9073 start.go:364] duration metric: took 239.25µs to acquireMachinesLock for "old-k8s-version-152000"
	I0702 21:40:22.692476    9073 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:40:22.692569    9073 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:40:22.700907    9073 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:40:22.737654    9073 start.go:159] libmachine.API.Create for "old-k8s-version-152000" (driver="qemu2")
	I0702 21:40:22.737701    9073 client.go:168] LocalClient.Create starting
	I0702 21:40:22.737815    9073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:40:22.737875    9073 main.go:141] libmachine: Decoding PEM data...
	I0702 21:40:22.737889    9073 main.go:141] libmachine: Parsing certificate...
	I0702 21:40:22.737947    9073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:40:22.737986    9073 main.go:141] libmachine: Decoding PEM data...
	I0702 21:40:22.737996    9073 main.go:141] libmachine: Parsing certificate...
	I0702 21:40:22.738532    9073 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:40:22.871327    9073 main.go:141] libmachine: Creating SSH key...
	I0702 21:40:22.984308    9073 main.go:141] libmachine: Creating Disk image...
	I0702 21:40:22.984317    9073 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:40:22.984521    9073 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2
	I0702 21:40:22.994018    9073 main.go:141] libmachine: STDOUT: 
	I0702 21:40:22.994043    9073 main.go:141] libmachine: STDERR: 
	I0702 21:40:22.994087    9073 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2 +20000M
	I0702 21:40:23.002139    9073 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:40:23.002162    9073 main.go:141] libmachine: STDERR: 
	I0702 21:40:23.002177    9073 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2
	I0702 21:40:23.002202    9073 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:40:23.002237    9073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:c3:57:84:63:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2
	I0702 21:40:23.003912    9073 main.go:141] libmachine: STDOUT: 
	I0702 21:40:23.003932    9073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:40:23.003944    9073 client.go:171] duration metric: took 266.242916ms to LocalClient.Create
	I0702 21:40:25.006076    9073 start.go:128] duration metric: took 2.313510792s to createHost
	I0702 21:40:25.006104    9073 start.go:83] releasing machines lock for "old-k8s-version-152000", held for 2.313708625s
	W0702 21:40:25.006258    9073 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-152000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-152000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:40:25.014606    9073 out.go:177] 
	W0702 21:40:25.018667    9073 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:40:25.018676    9073 out.go:239] * 
	* 
	W0702 21:40:25.019640    9073 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:40:25.027644    9073 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-152000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000: exit status 7 (44.382666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.95s)
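
Every failure in this group reduces to the same root cause, visible in the logs above: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). A minimal sketch for checking the daemon on the build host follows; the paths come from this run's flags, while the Homebrew service invocation is an assumption that depends on how socket_vmnet was installed:

	# Does the socket exist, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Assumption: Homebrew-managed install; restart the service if the daemon is down.
	sudo brew services restart socket_vmnet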

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-152000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-152000 create -f testdata/busybox.yaml: exit status 1 (28.184542ms)

** stderr ** 
	error: context "old-k8s-version-152000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-152000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000: exit status 7 (29.22375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-152000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000: exit status 7 (28.601542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
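
The "context does not exist" error is a downstream symptom of the failed FirstStart: minikube never wrote an old-k8s-version-152000 entry into the kubeconfig, so every kubectl call in this group fails before reaching a cluster. A quick way to confirm, using the kubeconfig path from this run:

	KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig kubectl config get-contexts
	# a healthy run would list old-k8s-version-152000; here the context is absent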

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-152000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-152000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-152000 describe deploy/metrics-server -n kube-system: exit status 1 (26.540541ms)

** stderr ** 
	error: context "old-k8s-version-152000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-152000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000: exit status 7 (30.907541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
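
The assertion expects the metrics-server deployment image to reflect both overrides passed to "addons enable": --registries prepends fake.domain and --images substitutes registry.k8s.io/echoserver:1.4, giving fake.domain/registry.k8s.io/echoserver:1.4. On a cluster that actually started, an equivalent manual check would be roughly:

	kubectl --context old-k8s-version-152000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected output: fake.domain/registry.k8s.io/echoserver:1.4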

TestStartStop/group/old-k8s-version/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-152000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-152000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.179064083s)

-- stdout --
	* [old-k8s-version-152000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-152000" primary control-plane node in "old-k8s-version-152000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-152000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-152000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:40:28.287080    9126 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:40:28.287214    9126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:28.287218    9126 out.go:304] Setting ErrFile to fd 2...
	I0702 21:40:28.287221    9126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:28.287351    9126 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:40:28.288354    9126 out.go:298] Setting JSON to false
	I0702 21:40:28.305390    9126 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5997,"bootTime":1719975631,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:40:28.305487    9126 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:40:28.307286    9126 out.go:177] * [old-k8s-version-152000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:40:28.315372    9126 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:40:28.315428    9126 notify.go:220] Checking for updates...
	I0702 21:40:28.322260    9126 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:40:28.325331    9126 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:40:28.328243    9126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:40:28.331272    9126 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:40:28.334338    9126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:40:28.337649    9126 config.go:182] Loaded profile config "old-k8s-version-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0702 21:40:28.341249    9126 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0702 21:40:28.344324    9126 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:40:28.348283    9126 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:40:28.357260    9126 start.go:297] selected driver: qemu2
	I0702 21:40:28.357265    9126 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:40:28.357314    9126 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:40:28.359670    9126 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:40:28.359693    9126 cni.go:84] Creating CNI manager for ""
	I0702 21:40:28.359701    9126 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0702 21:40:28.359726    9126 start.go:340] cluster config:
	{Name:old-k8s-version-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:40:28.363417    9126 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:28.371255    9126 out.go:177] * Starting "old-k8s-version-152000" primary control-plane node in "old-k8s-version-152000" cluster
	I0702 21:40:28.374241    9126 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0702 21:40:28.374255    9126 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0702 21:40:28.374261    9126 cache.go:56] Caching tarball of preloaded images
	I0702 21:40:28.374321    9126 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:40:28.374326    9126 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0702 21:40:28.374383    9126 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/old-k8s-version-152000/config.json ...
	I0702 21:40:28.374724    9126 start.go:360] acquireMachinesLock for old-k8s-version-152000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:40:28.374758    9126 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "old-k8s-version-152000"
	I0702 21:40:28.374769    9126 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:40:28.374774    9126 fix.go:54] fixHost starting: 
	I0702 21:40:28.374893    9126 fix.go:112] recreateIfNeeded on old-k8s-version-152000: state=Stopped err=<nil>
	W0702 21:40:28.374901    9126 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:40:28.379327    9126 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-152000" ...
	I0702 21:40:28.387255    9126 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:c3:57:84:63:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2
	I0702 21:40:28.389458    9126 main.go:141] libmachine: STDOUT: 
	I0702 21:40:28.389516    9126 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:40:28.389543    9126 fix.go:56] duration metric: took 14.767792ms for fixHost
	I0702 21:40:28.389548    9126 start.go:83] releasing machines lock for "old-k8s-version-152000", held for 14.78525ms
	W0702 21:40:28.389555    9126 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:40:28.389587    9126 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:40:28.389591    9126 start.go:728] Will try again in 5 seconds ...
	I0702 21:40:33.391702    9126 start.go:360] acquireMachinesLock for old-k8s-version-152000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:40:33.392002    9126 start.go:364] duration metric: took 221.458µs to acquireMachinesLock for "old-k8s-version-152000"
	I0702 21:40:33.392097    9126 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:40:33.392107    9126 fix.go:54] fixHost starting: 
	I0702 21:40:33.392577    9126 fix.go:112] recreateIfNeeded on old-k8s-version-152000: state=Stopped err=<nil>
	W0702 21:40:33.392594    9126 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:40:33.399757    9126 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-152000" ...
	I0702 21:40:33.402946    9126 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:c3:57:84:63:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000/disk.qcow2
	I0702 21:40:33.407167    9126 main.go:141] libmachine: STDOUT: 
	I0702 21:40:33.407196    9126 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:40:33.407231    9126 fix.go:56] duration metric: took 15.126292ms for fixHost
	I0702 21:40:33.407241    9126 start.go:83] releasing machines lock for "old-k8s-version-152000", held for 15.218542ms
	W0702 21:40:33.407312    9126 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-152000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:40:33.415721    9126 out.go:177] 
	W0702 21:40:33.418901    9126 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:40:33.418913    9126 out.go:239] * 
	W0702 21:40:33.419722    9126 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:40:33.428896    9126 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-152000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000: exit status 7 (31.600708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.21s)
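
On the second start the driver takes the "Restarting existing qemu2 VM" path, retries once after 5 seconds, and fails identically; retrying cannot help while the socket_vmnet daemon itself is down. A hedged sketch for ruling out a stale QEMU process holding the profile, using the machines directory shown in the log above:

	M=/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/old-k8s-version-152000
	[ -f "$M/qemu.pid" ] && ps -p "$(cat "$M/qemu.pid")" || echo "no live QEMU for this profile"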

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-152000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000: exit status 7 (30.088208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
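
Every post-mortem in this group follows the same pattern: "status --format={{.Host}}" prints Stopped and exits 7, which the harness explicitly tolerates ("may be ok") since the non-zero exit here reports a not-running host rather than a harness error. Reproduced by hand:

	out/minikube-darwin-arm64 status --format='{{.Host}}' -p old-k8s-version-152000
	echo "exit=$?"   # 7 here, matching the Non-zero exit lines above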

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-152000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-152000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-152000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.6135ms)

** stderr ** 
	error: context "old-k8s-version-152000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-152000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000: exit status 7 (28.310167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
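
As with metrics-server, the dashboard assertions fail at client config because the context is missing, so the expected-image check never runs. Had the cluster started, an equivalent manual check would be:

	kubectl --context old-k8s-version-152000 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: registry.k8s.io/echoserver:1.4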

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-152000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000: exit status 7 (28.918625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
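
The want/got diff above is entirely one-sided: every expected v1.20.0 image is reported missing because "image list" ran against a stopped host and returned an empty list, not because individual images failed to load. For reference, the same listing on a healthy profile (table output is easier to scan than JSON):

	out/minikube-darwin-arm64 -p old-k8s-version-152000 image list --format=table
	# should include k8s.gcr.io/kube-apiserver:v1.20.0, etcd:3.4.13-0, coredns:1.7.0, pause:3.2, ...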

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-152000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-152000 --alsologtostderr -v=1: exit status 83 (39.750208ms)

-- stdout --
	* The control-plane node old-k8s-version-152000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-152000"

-- /stdout --
** stderr ** 
	I0702 21:40:33.650964    9147 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:40:33.651313    9147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:33.651321    9147 out.go:304] Setting ErrFile to fd 2...
	I0702 21:40:33.651324    9147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:33.651470    9147 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:40:33.651692    9147 out.go:298] Setting JSON to false
	I0702 21:40:33.651704    9147 mustload.go:65] Loading cluster: old-k8s-version-152000
	I0702 21:40:33.651924    9147 config.go:182] Loaded profile config "old-k8s-version-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0702 21:40:33.655707    9147 out.go:177] * The control-plane node old-k8s-version-152000 host is not running: state=Stopped
	I0702 21:40:33.659657    9147 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-152000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-152000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000: exit status 7 (28.8945ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-152000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000: exit status 7 (29.958209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-152000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
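
Note the distinct exit code: pause exits 83 rather than 80 because it short-circuits with guidance ("host is not running ... To start a cluster, run ...") instead of attempting a pause against the stopped VM. Reproduced by hand:

	out/minikube-darwin-arm64 pause -p old-k8s-version-152000; echo "exit=$?"
	# prints the 'host is not running' advice and exits 83, as captured above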

TestStartStop/group/no-preload/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-639000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-639000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (9.96025275s)

-- stdout --
	* [no-preload-639000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-639000" primary control-plane node in "no-preload-639000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-639000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:40:33.955330    9164 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:40:33.955479    9164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:33.955484    9164 out.go:304] Setting ErrFile to fd 2...
	I0702 21:40:33.955486    9164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:33.955619    9164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:40:33.956702    9164 out.go:298] Setting JSON to false
	I0702 21:40:33.973383    9164 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6002,"bootTime":1719975631,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:40:33.973456    9164 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:40:33.976705    9164 out.go:177] * [no-preload-639000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:40:33.983632    9164 notify.go:220] Checking for updates...
	I0702 21:40:33.987653    9164 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:40:33.990659    9164 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:40:33.993571    9164 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:40:33.996627    9164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:40:33.999671    9164 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:40:34.002642    9164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:40:34.005923    9164 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:40:34.005986    9164 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:40:34.006028    9164 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:40:34.009660    9164 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:40:34.016591    9164 start.go:297] selected driver: qemu2
	I0702 21:40:34.016601    9164 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:40:34.016609    9164 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:40:34.018816    9164 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:40:34.021609    9164 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:40:34.024582    9164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:40:34.024597    9164 cni.go:84] Creating CNI manager for ""
	I0702 21:40:34.024608    9164 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:40:34.024617    9164 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:40:34.024643    9164 start.go:340] cluster config:
	{Name:no-preload-639000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-639000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:40:34.028070    9164 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:34.036649    9164 out.go:177] * Starting "no-preload-639000" primary control-plane node in "no-preload-639000" cluster
	I0702 21:40:34.040638    9164 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:40:34.040727    9164 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/no-preload-639000/config.json ...
	I0702 21:40:34.040743    9164 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/no-preload-639000/config.json: {Name:mk3ec8d3ef230cda8358330af5237b1a42bcaa50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:40:34.040769    9164 cache.go:107] acquiring lock: {Name:mk238b4aebfc652293d7d4096b6761d9a2ddeb9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:34.040768    9164 cache.go:107] acquiring lock: {Name:mk62dfac4b7a4125b3e66801151e1518d222f412 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:34.040775    9164 cache.go:107] acquiring lock: {Name:mkd0a61ea876d5a98644ab6eb430a421e8174fa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:34.040791    9164 cache.go:107] acquiring lock: {Name:mka67906a254c5c2a14ca0a0aa8599f6055da4b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:34.040800    9164 cache.go:107] acquiring lock: {Name:mk1b537b55d18a29daf16094b5312f3d62d7b353 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:34.040790    9164 cache.go:107] acquiring lock: {Name:mked88dea50f2ac3ac175c9506d8019c59bca921 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:34.040801    9164 cache.go:107] acquiring lock: {Name:mk44fbc0d8a19004417d3e4fae3a1ec8cd2ad269 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:34.040806    9164 cache.go:107] acquiring lock: {Name:mk2f69b7d4118fb694674508f6b6fc9e0ead8295 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:34.040885    9164 cache.go:115] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0702 21:40:34.040893    9164 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 125.542µs
	I0702 21:40:34.040898    9164 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0702 21:40:34.041005    9164 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0702 21:40:34.041033    9164 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0702 21:40:34.041092    9164 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0702 21:40:34.041141    9164 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0702 21:40:34.041176    9164 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0702 21:40:34.041226    9164 start.go:360] acquireMachinesLock for no-preload-639000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:40:34.041233    9164 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0702 21:40:34.041261    9164 start.go:364] duration metric: took 25.541µs to acquireMachinesLock for "no-preload-639000"
	I0702 21:40:34.041314    9164 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0702 21:40:34.041277    9164 start.go:93] Provisioning new machine with config: &{Name:no-preload-639000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-639000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:40:34.041326    9164 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:40:34.045525    9164 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:40:34.053460    9164 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0702 21:40:34.053545    9164 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0702 21:40:34.054414    9164 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0702 21:40:34.056616    9164 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0702 21:40:34.056642    9164 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0702 21:40:34.056709    9164 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0702 21:40:34.056727    9164 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0702 21:40:34.060977    9164 start.go:159] libmachine.API.Create for "no-preload-639000" (driver="qemu2")
	I0702 21:40:34.060994    9164 client.go:168] LocalClient.Create starting
	I0702 21:40:34.061081    9164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:40:34.061114    9164 main.go:141] libmachine: Decoding PEM data...
	I0702 21:40:34.061121    9164 main.go:141] libmachine: Parsing certificate...
	I0702 21:40:34.061162    9164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:40:34.061185    9164 main.go:141] libmachine: Decoding PEM data...
	I0702 21:40:34.061189    9164 main.go:141] libmachine: Parsing certificate...
	I0702 21:40:34.061538    9164 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:40:34.195663    9164 main.go:141] libmachine: Creating SSH key...
	I0702 21:40:34.265198    9164 main.go:141] libmachine: Creating Disk image...
	I0702 21:40:34.265216    9164 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:40:34.265423    9164 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2
	I0702 21:40:34.277273    9164 main.go:141] libmachine: STDOUT: 
	I0702 21:40:34.277295    9164 main.go:141] libmachine: STDERR: 
	I0702 21:40:34.277356    9164 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2 +20000M
	I0702 21:40:34.287357    9164 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:40:34.287372    9164 main.go:141] libmachine: STDERR: 
	I0702 21:40:34.287387    9164 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2
	I0702 21:40:34.287391    9164 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:40:34.287420    9164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ff:05:77:d9:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2
	I0702 21:40:34.289846    9164 main.go:141] libmachine: STDOUT: 
	I0702 21:40:34.289882    9164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:40:34.289938    9164 client.go:171] duration metric: took 228.944041ms to LocalClient.Create
	I0702 21:40:34.445910    9164 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0702 21:40:34.452443    9164 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2
	I0702 21:40:34.470406    9164 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0702 21:40:34.481800    9164 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0702 21:40:34.490137    9164 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0702 21:40:34.528780    9164 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2
	I0702 21:40:34.575456    9164 cache.go:162] opening:  /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2
	I0702 21:40:34.592805    9164 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0702 21:40:34.592817    9164 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 552.036041ms
	I0702 21:40:34.592824    9164 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0702 21:40:36.290137    9164 start.go:128] duration metric: took 2.248833333s to createHost
	I0702 21:40:36.290178    9164 start.go:83] releasing machines lock for "no-preload-639000", held for 2.2489535s
	W0702 21:40:36.290216    9164 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:40:36.295637    9164 out.go:177] * Deleting "no-preload-639000" in qemu2 ...
	W0702 21:40:36.315154    9164 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:40:36.315171    9164 start.go:728] Will try again in 5 seconds ...
	I0702 21:40:37.523000    9164 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 exists
	I0702 21:40:37.523027    9164 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.2" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2" took 3.48232675s
	I0702 21:40:37.523037    9164 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.2 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 succeeded
	I0702 21:40:37.667808    9164 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0702 21:40:37.667821    9164 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.627085666s
	I0702 21:40:37.667830    9164 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0702 21:40:38.164315    9164 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 exists
	I0702 21:40:38.164364    9164 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.2" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2" took 4.123640542s
	I0702 21:40:38.164380    9164 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.2 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 succeeded
	I0702 21:40:38.909939    9164 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 exists
	I0702 21:40:38.910001    9164 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.2" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2" took 4.869306334s
	I0702 21:40:38.910033    9164 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.2 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 succeeded
	I0702 21:40:39.166765    9164 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 exists
	I0702 21:40:39.166788    9164 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.2" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2" took 5.126135917s
	I0702 21:40:39.166800    9164 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.2 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 succeeded
	I0702 21:40:41.198347    9164 cache.go:157] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0702 21:40:41.198381    9164 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 7.157718208s
	I0702 21:40:41.198398    9164 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0702 21:40:41.198416    9164 cache.go:87] Successfully saved all images to host disk.
	I0702 21:40:41.317244    9164 start.go:360] acquireMachinesLock for no-preload-639000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:40:41.317740    9164 start.go:364] duration metric: took 420.209µs to acquireMachinesLock for "no-preload-639000"
	I0702 21:40:41.317866    9164 start.go:93] Provisioning new machine with config: &{Name:no-preload-639000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:no-preload-639000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:40:41.318133    9164 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:40:41.328726    9164 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:40:41.378584    9164 start.go:159] libmachine.API.Create for "no-preload-639000" (driver="qemu2")
	I0702 21:40:41.378635    9164 client.go:168] LocalClient.Create starting
	I0702 21:40:41.378757    9164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:40:41.378835    9164 main.go:141] libmachine: Decoding PEM data...
	I0702 21:40:41.378857    9164 main.go:141] libmachine: Parsing certificate...
	I0702 21:40:41.378918    9164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:40:41.378984    9164 main.go:141] libmachine: Decoding PEM data...
	I0702 21:40:41.379011    9164 main.go:141] libmachine: Parsing certificate...
	I0702 21:40:41.379555    9164 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:40:41.520024    9164 main.go:141] libmachine: Creating SSH key...
	I0702 21:40:41.819855    9164 main.go:141] libmachine: Creating Disk image...
	I0702 21:40:41.819870    9164 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:40:41.820073    9164 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2
	I0702 21:40:41.829878    9164 main.go:141] libmachine: STDOUT: 
	I0702 21:40:41.829928    9164 main.go:141] libmachine: STDERR: 
	I0702 21:40:41.829984    9164 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2 +20000M
	I0702 21:40:41.838265    9164 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:40:41.838308    9164 main.go:141] libmachine: STDERR: 
	I0702 21:40:41.838325    9164 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2
	I0702 21:40:41.838329    9164 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:40:41.838374    9164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:ac:3c:d1:04:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2
	I0702 21:40:41.840162    9164 main.go:141] libmachine: STDOUT: 
	I0702 21:40:41.840192    9164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:40:41.840205    9164 client.go:171] duration metric: took 461.575542ms to LocalClient.Create
	I0702 21:40:43.842561    9164 start.go:128] duration metric: took 2.524379459s to createHost
	I0702 21:40:43.842696    9164 start.go:83] releasing machines lock for "no-preload-639000", held for 2.524976375s
	W0702 21:40:43.843102    9164 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-639000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-639000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:40:43.855677    9164 out.go:177] 
	W0702 21:40:43.858715    9164 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:40:43.858748    9164 out.go:239] * 
	* 
	W0702 21:40:43.861198    9164 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:40:43.875594    9164 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-639000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000: exit status 7 (62.052583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-639000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.02s)
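Note: every failure in this group shares the root cause visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). As a rough diagnostic sketch for the CI host, assuming socket_vmnet was installed from source under /opt/socket_vmnet (only the client path appears in these logs, so the daemon path is an assumption), one might check:

	# Is the daemon socket present, and is a socket_vmnet process running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If not, start the daemon manually (invocation per the socket_vmnet README;
	# 192.168.105.1 is an assumed gateway address, not taken from this report):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet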

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-639000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-639000 create -f testdata/busybox.yaml: exit status 1 (31.478667ms)

** stderr ** 
	error: context "no-preload-639000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-639000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000: exit status 7 (30.983625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-639000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000: exit status 7 (29.273ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-639000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-639000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-639000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-639000 describe deploy/metrics-server -n kube-system: exit status 1 (26.9015ms)

** stderr ** 
	error: context "no-preload-639000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-639000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000: exit status 7 (29.799541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-639000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-639000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-639000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.177029167s)

-- stdout --
	* [no-preload-639000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-639000" primary control-plane node in "no-preload-639000" cluster
	* Restarting existing qemu2 VM for "no-preload-639000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-639000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:40:46.422009    9240 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:40:46.422133    9240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:46.422137    9240 out.go:304] Setting ErrFile to fd 2...
	I0702 21:40:46.422140    9240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:46.422260    9240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:40:46.423327    9240 out.go:298] Setting JSON to false
	I0702 21:40:46.439678    9240 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6015,"bootTime":1719975631,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:40:46.439745    9240 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:40:46.444163    9240 out.go:177] * [no-preload-639000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:40:46.451116    9240 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:40:46.451146    9240 notify.go:220] Checking for updates...
	I0702 21:40:46.455147    9240 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:40:46.458155    9240 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:40:46.461829    9240 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:40:46.466047    9240 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:40:46.469140    9240 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:40:46.470690    9240 config.go:182] Loaded profile config "no-preload-639000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:40:46.470947    9240 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:40:46.475107    9240 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:40:46.481990    9240 start.go:297] selected driver: qemu2
	I0702 21:40:46.482003    9240 start.go:901] validating driver "qemu2" against &{Name:no-preload-639000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:no-preload-639000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:40:46.482068    9240 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:40:46.484468    9240 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:40:46.484529    9240 cni.go:84] Creating CNI manager for ""
	I0702 21:40:46.484537    9240 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:40:46.484564    9240 start.go:340] cluster config:
	{Name:no-preload-639000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-639000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:40:46.487977    9240 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:46.496112    9240 out.go:177] * Starting "no-preload-639000" primary control-plane node in "no-preload-639000" cluster
	I0702 21:40:46.500128    9240 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:40:46.500183    9240 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/no-preload-639000/config.json ...
	I0702 21:40:46.500213    9240 cache.go:107] acquiring lock: {Name:mk238b4aebfc652293d7d4096b6761d9a2ddeb9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:46.500219    9240 cache.go:107] acquiring lock: {Name:mked88dea50f2ac3ac175c9506d8019c59bca921 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:46.500228    9240 cache.go:107] acquiring lock: {Name:mka67906a254c5c2a14ca0a0aa8599f6055da4b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:46.500273    9240 cache.go:115] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0702 21:40:46.500278    9240 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 68.666µs
	I0702 21:40:46.500284    9240 cache.go:115] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0702 21:40:46.500288    9240 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 62.458µs
	I0702 21:40:46.500295    9240 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0702 21:40:46.500282    9240 cache.go:107] acquiring lock: {Name:mk62dfac4b7a4125b3e66801151e1518d222f412 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:46.500284    9240 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0702 21:40:46.500293    9240 cache.go:107] acquiring lock: {Name:mk44fbc0d8a19004417d3e4fae3a1ec8cd2ad269 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:46.500349    9240 cache.go:115] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 exists
	I0702 21:40:46.500353    9240 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.2" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2" took 75.125µs
	I0702 21:40:46.500357    9240 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.2 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 succeeded
	I0702 21:40:46.500274    9240 cache.go:115] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 exists
	I0702 21:40:46.500361    9240 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.2" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2" took 151.041µs
	I0702 21:40:46.500297    9240 cache.go:107] acquiring lock: {Name:mkd0a61ea876d5a98644ab6eb430a421e8174fa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:46.500301    9240 cache.go:107] acquiring lock: {Name:mk2f69b7d4118fb694674508f6b6fc9e0ead8295 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:46.500387    9240 cache.go:115] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0702 21:40:46.500391    9240 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 99.167µs
	I0702 21:40:46.500396    9240 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0702 21:40:46.500303    9240 cache.go:107] acquiring lock: {Name:mk1b537b55d18a29daf16094b5312f3d62d7b353 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:46.500401    9240 cache.go:115] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 exists
	I0702 21:40:46.500408    9240 cache.go:115] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0702 21:40:46.500408    9240 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.2" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2" took 111.833µs
	I0702 21:40:46.500413    9240 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.2 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 succeeded
	I0702 21:40:46.500365    9240 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.2 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 succeeded
	I0702 21:40:46.500412    9240 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 111.208µs
	I0702 21:40:46.500417    9240 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0702 21:40:46.500423    9240 cache.go:115] /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 exists
	I0702 21:40:46.500426    9240 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.2" -> "/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2" took 123.459µs
	I0702 21:40:46.500430    9240 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.2 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 succeeded
	I0702 21:40:46.500434    9240 cache.go:87] Successfully saved all images to host disk.
	I0702 21:40:46.500568    9240 start.go:360] acquireMachinesLock for no-preload-639000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:40:46.500599    9240 start.go:364] duration metric: took 25.791µs to acquireMachinesLock for "no-preload-639000"
	I0702 21:40:46.500609    9240 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:40:46.500616    9240 fix.go:54] fixHost starting: 
	I0702 21:40:46.500725    9240 fix.go:112] recreateIfNeeded on no-preload-639000: state=Stopped err=<nil>
	W0702 21:40:46.500733    9240 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:40:46.509024    9240 out.go:177] * Restarting existing qemu2 VM for "no-preload-639000" ...
	I0702 21:40:46.513094    9240 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:ac:3c:d1:04:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2
	I0702 21:40:46.514847    9240 main.go:141] libmachine: STDOUT: 
	I0702 21:40:46.514865    9240 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:40:46.514890    9240 fix.go:56] duration metric: took 14.275333ms for fixHost
	I0702 21:40:46.514895    9240 start.go:83] releasing machines lock for "no-preload-639000", held for 14.29125ms
	W0702 21:40:46.514900    9240 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:40:46.514922    9240 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:40:46.514927    9240 start.go:728] Will try again in 5 seconds ...
	I0702 21:40:51.517018    9240 start.go:360] acquireMachinesLock for no-preload-639000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:40:51.517303    9240 start.go:364] duration metric: took 228.375µs to acquireMachinesLock for "no-preload-639000"
	I0702 21:40:51.517396    9240 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:40:51.517409    9240 fix.go:54] fixHost starting: 
	I0702 21:40:51.517806    9240 fix.go:112] recreateIfNeeded on no-preload-639000: state=Stopped err=<nil>
	W0702 21:40:51.517820    9240 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:40:51.523153    9240 out.go:177] * Restarting existing qemu2 VM for "no-preload-639000" ...
	I0702 21:40:51.532175    9240 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:ac:3c:d1:04:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/no-preload-639000/disk.qcow2
	I0702 21:40:51.537278    9240 main.go:141] libmachine: STDOUT: 
	I0702 21:40:51.537327    9240 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:40:51.537368    9240 fix.go:56] duration metric: took 19.960958ms for fixHost
	I0702 21:40:51.537378    9240 start.go:83] releasing machines lock for "no-preload-639000", held for 20.062083ms
	W0702 21:40:51.537483    9240 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-639000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-639000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:40:51.546200    9240 out.go:177] 
	W0702 21:40:51.549244    9240 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:40:51.549261    9240 out.go:239] * 
	* 
	W0702 21:40:51.550915    9240 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:40:51.560109    9240 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-639000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000: exit status 7 (55.557958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-639000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.23s)
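Note: unlike FirstStart, this run takes the restart path ("Skipping create...Using existing machine configuration" / fixHost in the stderr above) because the profile written during FirstStart persists on disk even though the VM never booted. A quick way to confirm what it reloaded, using the config path already logged above (hedged: shown here only as a host-side inspection sketch), might be:

	# Profile config that "Using the qemu2 driver based on existing profile" reloads:
	cat /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/no-preload-639000/config.json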

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-639000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000: exit status 7 (31.846875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-639000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-639000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-639000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-639000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.522625ms)

** stderr ** 
	error: context "no-preload-639000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-639000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000: exit status 7 (29.107459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-639000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-639000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
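Note: this diff is a downstream symptom rather than a separate image problem: "image list" needs the VM's container runtime, and the VM never booted. The host-side cache itself succeeded ("Successfully saved all images to host disk" in the FirstStart log). A host-side sanity check, using the cache directory already printed in this report, might be:

	# Tarballs written by cache.go during FirstStart:
	ls /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/images/arm64/registry.k8s.io/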
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000: exit status 7 (29.480833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-639000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-639000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-639000 --alsologtostderr -v=1: exit status 83 (41.152583ms)

-- stdout --
	* The control-plane node no-preload-639000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-639000"

-- /stdout --
** stderr ** 
	I0702 21:40:51.810932    9268 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:40:51.811145    9268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:51.811149    9268 out.go:304] Setting ErrFile to fd 2...
	I0702 21:40:51.811151    9268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:51.811294    9268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:40:51.811533    9268 out.go:298] Setting JSON to false
	I0702 21:40:51.811542    9268 mustload.go:65] Loading cluster: no-preload-639000
	I0702 21:40:51.811734    9268 config.go:182] Loaded profile config "no-preload-639000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:40:51.816366    9268 out.go:177] * The control-plane node no-preload-639000 host is not running: state=Stopped
	I0702 21:40:51.820263    9268 out.go:177]   To start a cluster, run: "minikube start -p no-preload-639000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-639000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000: exit status 7 (28.756917ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-639000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000: exit status 7 (29.744416ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-639000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-167000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-167000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (9.773877958s)

-- stdout --
	* [embed-certs-167000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-167000" primary control-plane node in "embed-certs-167000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-167000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:40:52.116605    9286 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:40:52.116723    9286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:52.116727    9286 out.go:304] Setting ErrFile to fd 2...
	I0702 21:40:52.116730    9286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:40:52.116873    9286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:40:52.118228    9286 out.go:298] Setting JSON to false
	I0702 21:40:52.134635    9286 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6021,"bootTime":1719975631,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:40:52.134704    9286 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:40:52.139351    9286 out.go:177] * [embed-certs-167000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:40:52.146259    9286 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:40:52.146275    9286 notify.go:220] Checking for updates...
	I0702 21:40:52.152325    9286 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:40:52.155278    9286 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:40:52.158313    9286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:40:52.161352    9286 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:40:52.164279    9286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:40:52.167702    9286 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:40:52.167768    9286 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:40:52.167838    9286 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:40:52.172201    9286 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:40:52.179239    9286 start.go:297] selected driver: qemu2
	I0702 21:40:52.179246    9286 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:40:52.179252    9286 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:40:52.181564    9286 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:40:52.184221    9286 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:40:52.187312    9286 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:40:52.187342    9286 cni.go:84] Creating CNI manager for ""
	I0702 21:40:52.187350    9286 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:40:52.187354    9286 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:40:52.187376    9286 start.go:340] cluster config:
	{Name:embed-certs-167000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-167000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:40:52.190769    9286 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:40:52.199243    9286 out.go:177] * Starting "embed-certs-167000" primary control-plane node in "embed-certs-167000" cluster
	I0702 21:40:52.203283    9286 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:40:52.203296    9286 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:40:52.203302    9286 cache.go:56] Caching tarball of preloaded images
	I0702 21:40:52.203357    9286 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:40:52.203362    9286 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:40:52.203417    9286 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/embed-certs-167000/config.json ...
	I0702 21:40:52.203428    9286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/embed-certs-167000/config.json: {Name:mkff779f93f66aac0e02bfeacefaf7deb5701d17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:40:52.203743    9286 start.go:360] acquireMachinesLock for embed-certs-167000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:40:52.203776    9286 start.go:364] duration metric: took 27.666µs to acquireMachinesLock for "embed-certs-167000"
	I0702 21:40:52.203789    9286 start.go:93] Provisioning new machine with config: &{Name:embed-certs-167000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-167000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:40:52.203818    9286 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:40:52.212319    9286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:40:52.229365    9286 start.go:159] libmachine.API.Create for "embed-certs-167000" (driver="qemu2")
	I0702 21:40:52.229388    9286 client.go:168] LocalClient.Create starting
	I0702 21:40:52.229460    9286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:40:52.229493    9286 main.go:141] libmachine: Decoding PEM data...
	I0702 21:40:52.229501    9286 main.go:141] libmachine: Parsing certificate...
	I0702 21:40:52.229553    9286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:40:52.229576    9286 main.go:141] libmachine: Decoding PEM data...
	I0702 21:40:52.229584    9286 main.go:141] libmachine: Parsing certificate...
	I0702 21:40:52.229954    9286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:40:52.357739    9286 main.go:141] libmachine: Creating SSH key...
	I0702 21:40:52.461710    9286 main.go:141] libmachine: Creating Disk image...
	I0702 21:40:52.461716    9286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:40:52.461884    9286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2
	I0702 21:40:52.471729    9286 main.go:141] libmachine: STDOUT: 
	I0702 21:40:52.471755    9286 main.go:141] libmachine: STDERR: 
	I0702 21:40:52.471815    9286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2 +20000M
	I0702 21:40:52.480125    9286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:40:52.480138    9286 main.go:141] libmachine: STDERR: 
	I0702 21:40:52.480161    9286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2
	I0702 21:40:52.480165    9286 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:40:52.480196    9286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:13:82:f6:d7:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2
	I0702 21:40:52.481865    9286 main.go:141] libmachine: STDOUT: 
	I0702 21:40:52.481877    9286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:40:52.481902    9286 client.go:171] duration metric: took 252.505083ms to LocalClient.Create
	I0702 21:40:54.483978    9286 start.go:128] duration metric: took 2.280184834s to createHost
	I0702 21:40:54.484027    9286 start.go:83] releasing machines lock for "embed-certs-167000", held for 2.280289583s
	W0702 21:40:54.484058    9286 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:40:54.497988    9286 out.go:177] * Deleting "embed-certs-167000" in qemu2 ...
	W0702 21:40:54.510320    9286 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:40:54.510333    9286 start.go:728] Will try again in 5 seconds ...
	I0702 21:40:59.512375    9286 start.go:360] acquireMachinesLock for embed-certs-167000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:40:59.512599    9286 start.go:364] duration metric: took 170µs to acquireMachinesLock for "embed-certs-167000"
	I0702 21:40:59.512638    9286 start.go:93] Provisioning new machine with config: &{Name:embed-certs-167000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-167000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:40:59.512721    9286 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:40:59.522054    9286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:40:59.544935    9286 start.go:159] libmachine.API.Create for "embed-certs-167000" (driver="qemu2")
	I0702 21:40:59.544965    9286 client.go:168] LocalClient.Create starting
	I0702 21:40:59.545036    9286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:40:59.545073    9286 main.go:141] libmachine: Decoding PEM data...
	I0702 21:40:59.545085    9286 main.go:141] libmachine: Parsing certificate...
	I0702 21:40:59.545125    9286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:40:59.545153    9286 main.go:141] libmachine: Decoding PEM data...
	I0702 21:40:59.545160    9286 main.go:141] libmachine: Parsing certificate...
	I0702 21:40:59.545499    9286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:40:59.688720    9286 main.go:141] libmachine: Creating SSH key...
	I0702 21:40:59.805211    9286 main.go:141] libmachine: Creating Disk image...
	I0702 21:40:59.805216    9286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:40:59.805386    9286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2
	I0702 21:40:59.814941    9286 main.go:141] libmachine: STDOUT: 
	I0702 21:40:59.814956    9286 main.go:141] libmachine: STDERR: 
	I0702 21:40:59.814994    9286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2 +20000M
	I0702 21:40:59.822901    9286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:40:59.822912    9286 main.go:141] libmachine: STDERR: 
	I0702 21:40:59.822922    9286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2
	I0702 21:40:59.822927    9286 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:40:59.822960    9286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4f:09:ca:11:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2
	I0702 21:40:59.824562    9286 main.go:141] libmachine: STDOUT: 
	I0702 21:40:59.824576    9286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:40:59.824589    9286 client.go:171] duration metric: took 279.624917ms to LocalClient.Create
	I0702 21:41:01.826847    9286 start.go:128] duration metric: took 2.314140542s to createHost
	I0702 21:41:01.826933    9286 start.go:83] releasing machines lock for "embed-certs-167000", held for 2.314366917s
	W0702 21:41:01.827321    9286 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-167000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-167000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:01.836032    9286 out.go:177] 
	W0702 21:41:01.840037    9286 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:41:01.840059    9286 out.go:239] * 
	* 
	W0702 21:41:01.842872    9286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:41:01.853881    9286 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-167000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000: exit status 7 (65.032833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.84s)
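
Every failure in this group traces to the single host-side error visible above: socket_vmnet_client is refused on /var/run/socket_vmnet, so no qemu2 VM ever boots and each later step finds the profile Stopped. A minimal sketch for probing the daemon on the CI host, reusing the client and socket paths from the log (the launchd label in the last command is an assumption about how socket_vmnet was installed; this report does not confirm it):

	# Check whether anything is serving the socket
	ls -l /var/run/socket_vmnet
	# Connect exactly as the qemu2 driver does; a healthy daemon execs the trailing command
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# If socket_vmnet runs under launchd (label assumed), inspect the service
	sudo launchctl print system/io.github.lima-vm.socket_vmnet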

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-167000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-167000 create -f testdata/busybox.yaml: exit status 1 (30.375834ms)

** stderr **
	error: context "embed-certs-167000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-167000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000: exit status 7 (29.915083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-167000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000: exit status 7 (29.764667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-167000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-167000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-167000 describe deploy/metrics-server -n kube-system: exit status 1 (26.540875ms)

** stderr **
	error: context "embed-certs-167000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-167000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000: exit status 7 (29.464833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-167000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-167000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.18718325s)
-- stdout --
	* [embed-certs-167000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-167000" primary control-plane node in "embed-certs-167000" cluster
	* Restarting existing qemu2 VM for "embed-certs-167000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-167000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0702 21:41:04.007285    9332 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:41:04.007407    9332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:04.007411    9332 out.go:304] Setting ErrFile to fd 2...
	I0702 21:41:04.007414    9332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:04.007546    9332 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:41:04.008594    9332 out.go:298] Setting JSON to false
	I0702 21:41:04.025351    9332 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6033,"bootTime":1719975631,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:41:04.025419    9332 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:41:04.030117    9332 out.go:177] * [embed-certs-167000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:41:04.037011    9332 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:41:04.037066    9332 notify.go:220] Checking for updates...
	I0702 21:41:04.043958    9332 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:41:04.046963    9332 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:41:04.050013    9332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:41:04.052909    9332 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:41:04.056027    9332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:41:04.059345    9332 config.go:182] Loaded profile config "embed-certs-167000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:41:04.059610    9332 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:41:04.063952    9332 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:41:04.071005    9332 start.go:297] selected driver: qemu2
	I0702 21:41:04.071014    9332 start.go:901] validating driver "qemu2" against &{Name:embed-certs-167000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-167000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:41:04.071077    9332 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:41:04.073220    9332 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:41:04.073248    9332 cni.go:84] Creating CNI manager for ""
	I0702 21:41:04.073255    9332 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:41:04.073280    9332 start.go:340] cluster config:
	{Name:embed-certs-167000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-167000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:41:04.076747    9332 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:41:04.092983    9332 out.go:177] * Starting "embed-certs-167000" primary control-plane node in "embed-certs-167000" cluster
	I0702 21:41:04.096975    9332 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:41:04.096988    9332 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:41:04.096994    9332 cache.go:56] Caching tarball of preloaded images
	I0702 21:41:04.097048    9332 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:41:04.097053    9332 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:41:04.097113    9332 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/embed-certs-167000/config.json ...
	I0702 21:41:04.097447    9332 start.go:360] acquireMachinesLock for embed-certs-167000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:41:04.097473    9332 start.go:364] duration metric: took 21µs to acquireMachinesLock for "embed-certs-167000"
	I0702 21:41:04.097489    9332 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:41:04.097492    9332 fix.go:54] fixHost starting: 
	I0702 21:41:04.097603    9332 fix.go:112] recreateIfNeeded on embed-certs-167000: state=Stopped err=<nil>
	W0702 21:41:04.097613    9332 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:41:04.106001    9332 out.go:177] * Restarting existing qemu2 VM for "embed-certs-167000" ...
	I0702 21:41:04.109962    9332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4f:09:ca:11:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2
	I0702 21:41:04.111822    9332 main.go:141] libmachine: STDOUT: 
	I0702 21:41:04.111837    9332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:41:04.111861    9332 fix.go:56] duration metric: took 14.368ms for fixHost
	I0702 21:41:04.111864    9332 start.go:83] releasing machines lock for "embed-certs-167000", held for 14.387041ms
	W0702 21:41:04.111869    9332 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:41:04.111897    9332 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:04.111901    9332 start.go:728] Will try again in 5 seconds ...
	I0702 21:41:09.113974    9332 start.go:360] acquireMachinesLock for embed-certs-167000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:41:09.114351    9332 start.go:364] duration metric: took 307.25µs to acquireMachinesLock for "embed-certs-167000"
	I0702 21:41:09.114403    9332 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:41:09.114416    9332 fix.go:54] fixHost starting: 
	I0702 21:41:09.114914    9332 fix.go:112] recreateIfNeeded on embed-certs-167000: state=Stopped err=<nil>
	W0702 21:41:09.114931    9332 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:41:09.122324    9332 out.go:177] * Restarting existing qemu2 VM for "embed-certs-167000" ...
	I0702 21:41:09.125441    9332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4f:09:ca:11:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/disk.qcow2
	I0702 21:41:09.132559    9332 main.go:141] libmachine: STDOUT: 
	I0702 21:41:09.132625    9332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:41:09.132706    9332 fix.go:56] duration metric: took 18.289875ms for fixHost
	I0702 21:41:09.132721    9332 start.go:83] releasing machines lock for "embed-certs-167000", held for 18.353167ms
	W0702 21:41:09.132884    9332 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-167000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-167000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:09.140201    9332 out.go:177] 
	W0702 21:41:09.143359    9332 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:41:09.143377    9332 out.go:239] * 
	* 
	W0702 21:41:09.145324    9332 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:41:09.153316    9332 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-167000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000: exit status 7 (62.586959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
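
Unlike FirstStart, this run finds the leftover profile and disk image, takes the "Skipping create...Using existing machine configuration" path, and only retries the qemu launch, which is why it fails in about 5s instead of 10s: the ISO copy, SSH key, and qemu-img steps are all skipped. The reused state can be inspected directly; the paths below are copied from the log above:

	# Machine artifacts the restart path reuses instead of re-creating
	ls /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/embed-certs-167000/
	# Profile config written during FirstStart
	cat /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/embed-certs-167000/config.json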

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-167000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000: exit status 7 (32.228125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-167000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-167000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-167000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.147583ms)

** stderr **
	error: context "embed-certs-167000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-167000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000: exit status 7 (28.561583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-167000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000: exit status 7 (30.436583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
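
The (-want +got) diff above is cmp.Diff output: every expected image carries a leading "-" and nothing carries "+", meaning "image list --format=json" returned an empty set, consistent with there being no VM to query. A one-liner sketch for inspecting the same output by hand (the repoTags field name is an assumption about the JSON schema, not confirmed by this report):

	out/minikube-darwin-arm64 -p embed-certs-167000 image list --format=json | jq -r '.[].repoTags[]'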

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-167000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-167000 --alsologtostderr -v=1: exit status 83 (41.073ms)

-- stdout --
	* The control-plane node embed-certs-167000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-167000"

-- /stdout --
** stderr ** 
	I0702 21:41:09.415121    9353 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:41:09.415269    9353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:09.415277    9353 out.go:304] Setting ErrFile to fd 2...
	I0702 21:41:09.415279    9353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:09.415403    9353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:41:09.415640    9353 out.go:298] Setting JSON to false
	I0702 21:41:09.415650    9353 mustload.go:65] Loading cluster: embed-certs-167000
	I0702 21:41:09.415854    9353 config.go:182] Loaded profile config "embed-certs-167000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:41:09.420182    9353 out.go:177] * The control-plane node embed-certs-167000 host is not running: state=Stopped
	I0702 21:41:09.424224    9353 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-167000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-167000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000: exit status 7 (29.6215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-167000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000: exit status 7 (30.990042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
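
Note: pause exits with status 83 here because the profile's control-plane host is already Stopped, and minikube can only pause a running guest. A minimal pre-check, reusing the status invocation and profile name already shown in the log above:

	# "Running" is required before pause; the post-mortem above shows "Stopped".
	out/minikube-darwin-arm64 status -p embed-certs-167000 --format={{.Host}}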

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-265000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-265000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (9.741894875s)

-- stdout --
	* [default-k8s-diff-port-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-265000" primary control-plane node in "default-k8s-diff-port-265000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-265000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:41:09.822895    9377 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:41:09.823007    9377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:09.823013    9377 out.go:304] Setting ErrFile to fd 2...
	I0702 21:41:09.823015    9377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:09.823187    9377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:41:09.824483    9377 out.go:298] Setting JSON to false
	I0702 21:41:09.840918    9377 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6038,"bootTime":1719975631,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:41:09.840991    9377 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:41:09.846349    9377 out.go:177] * [default-k8s-diff-port-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:41:09.853330    9377 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:41:09.853344    9377 notify.go:220] Checking for updates...
	I0702 21:41:09.859174    9377 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:41:09.862235    9377 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:41:09.865251    9377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:41:09.868232    9377 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:41:09.871283    9377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:41:09.874502    9377 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:41:09.874568    9377 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:41:09.874610    9377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:41:09.877186    9377 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:41:09.884249    9377 start.go:297] selected driver: qemu2
	I0702 21:41:09.884266    9377 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:41:09.884273    9377 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:41:09.886551    9377 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:41:09.887982    9377 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:41:09.891358    9377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:41:09.891397    9377 cni.go:84] Creating CNI manager for ""
	I0702 21:41:09.891406    9377 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:41:09.891410    9377 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:41:09.891449    9377 start.go:340] cluster config:
	{Name:default-k8s-diff-port-265000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:41:09.894917    9377 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:41:09.903226    9377 out.go:177] * Starting "default-k8s-diff-port-265000" primary control-plane node in "default-k8s-diff-port-265000" cluster
	I0702 21:41:09.907258    9377 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:41:09.907274    9377 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:41:09.907283    9377 cache.go:56] Caching tarball of preloaded images
	I0702 21:41:09.907352    9377 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:41:09.907359    9377 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:41:09.907429    9377 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/default-k8s-diff-port-265000/config.json ...
	I0702 21:41:09.907446    9377 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/default-k8s-diff-port-265000/config.json: {Name:mk248e48f0107d7d479193cb9e27d67a57fd88ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:41:09.907762    9377 start.go:360] acquireMachinesLock for default-k8s-diff-port-265000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:41:09.907799    9377 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "default-k8s-diff-port-265000"
	I0702 21:41:09.907811    9377 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:41:09.907842    9377 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:41:09.916245    9377 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:41:09.933626    9377 start.go:159] libmachine.API.Create for "default-k8s-diff-port-265000" (driver="qemu2")
	I0702 21:41:09.933648    9377 client.go:168] LocalClient.Create starting
	I0702 21:41:09.933716    9377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:41:09.933748    9377 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:09.933759    9377 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:09.933795    9377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:41:09.933817    9377 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:09.933824    9377 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:09.934172    9377 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:41:10.059026    9377 main.go:141] libmachine: Creating SSH key...
	I0702 21:41:10.089795    9377 main.go:141] libmachine: Creating Disk image...
	I0702 21:41:10.089800    9377 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:41:10.089947    9377 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2
	I0702 21:41:10.099260    9377 main.go:141] libmachine: STDOUT: 
	I0702 21:41:10.099292    9377 main.go:141] libmachine: STDERR: 
	I0702 21:41:10.099342    9377 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2 +20000M
	I0702 21:41:10.107588    9377 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:41:10.107602    9377 main.go:141] libmachine: STDERR: 
	I0702 21:41:10.107624    9377 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2
	I0702 21:41:10.107628    9377 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:41:10.107655    9377 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:ee:9b:3a:56:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2
	I0702 21:41:10.109322    9377 main.go:141] libmachine: STDOUT: 
	I0702 21:41:10.109338    9377 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:41:10.109358    9377 client.go:171] duration metric: took 175.708708ms to LocalClient.Create
	I0702 21:41:12.109804    9377 start.go:128] duration metric: took 2.201970833s to createHost
	I0702 21:41:12.109877    9377 start.go:83] releasing machines lock for "default-k8s-diff-port-265000", held for 2.202111542s
	W0702 21:41:12.109932    9377 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:12.116256    9377 out.go:177] * Deleting "default-k8s-diff-port-265000" in qemu2 ...
	W0702 21:41:12.141445    9377 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:12.141479    9377 start.go:728] Will try again in 5 seconds ...
	I0702 21:41:17.141805    9377 start.go:360] acquireMachinesLock for default-k8s-diff-port-265000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:41:17.142336    9377 start.go:364] duration metric: took 387.834µs to acquireMachinesLock for "default-k8s-diff-port-265000"
	I0702 21:41:17.142452    9377 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:41:17.142753    9377 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:41:17.151358    9377 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:41:17.201824    9377 start.go:159] libmachine.API.Create for "default-k8s-diff-port-265000" (driver="qemu2")
	I0702 21:41:17.201882    9377 client.go:168] LocalClient.Create starting
	I0702 21:41:17.202005    9377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:41:17.202078    9377 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:17.202096    9377 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:17.202160    9377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:41:17.202205    9377 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:17.202218    9377 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:17.202771    9377 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:41:17.340865    9377 main.go:141] libmachine: Creating SSH key...
	I0702 21:41:17.478946    9377 main.go:141] libmachine: Creating Disk image...
	I0702 21:41:17.478956    9377 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:41:17.479148    9377 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2
	I0702 21:41:17.489096    9377 main.go:141] libmachine: STDOUT: 
	I0702 21:41:17.489116    9377 main.go:141] libmachine: STDERR: 
	I0702 21:41:17.489168    9377 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2 +20000M
	I0702 21:41:17.497433    9377 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:41:17.497448    9377 main.go:141] libmachine: STDERR: 
	I0702 21:41:17.497462    9377 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2
	I0702 21:41:17.497466    9377 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:41:17.497495    9377 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:40:bb:c4:77:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2
	I0702 21:41:17.499232    9377 main.go:141] libmachine: STDOUT: 
	I0702 21:41:17.499250    9377 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:41:17.499263    9377 client.go:171] duration metric: took 297.382041ms to LocalClient.Create
	I0702 21:41:19.499454    9377 start.go:128] duration metric: took 2.35672575s to createHost
	I0702 21:41:19.499509    9377 start.go:83] releasing machines lock for "default-k8s-diff-port-265000", held for 2.357159084s
	W0702 21:41:19.499682    9377 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-265000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-265000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:19.512124    9377 out.go:177] 
	W0702 21:41:19.515051    9377 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:41:19.515059    9377 out.go:239] * 
	* 
	W0702 21:41:19.515941    9377 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:41:19.527020    9377 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-265000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000: exit status 7 (39.165ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.78s)
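
Note: the root cause here is on the host side, not in minikube. QEMU is launched through socket_vmnet_client, and "Connection refused" on /var/run/socket_vmnet means no socket_vmnet daemon is listening on that unix socket. A sketch for checking and manually relaunching it, assuming the /opt/socket_vmnet install layout shown in the log (the gateway flag value is illustrative; see the socket_vmnet README):

	# Is anything listening on the vmnet socket?
	ls -l /var/run/socket_vmnet
	# Relaunch the daemon (vmnet access requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet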

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-265000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-265000 create -f testdata/busybox.yaml: exit status 1 (27.15225ms)

** stderr ** 
	error: context "default-k8s-diff-port-265000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-265000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000: exit status 7 (29.501292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-265000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000: exit status 7 (29.107042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
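
Note: this failure is purely downstream of FirstStart. Because the cluster never came up, minikube never wrote a "default-k8s-diff-port-265000" context into the kubeconfig, so every kubectl --context call in this group fails the same way. A quick way to confirm the missing context:

	# Errors with "context ... not found" until a start succeeds:
	kubectl config get-contexts default-k8s-diff-port-265000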

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-265000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-265000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-265000 describe deploy/metrics-server -n kube-system: exit status 1 (26.273125ms)

** stderr ** 
	error: context "default-k8s-diff-port-265000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-265000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000: exit status 7 (30.262959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
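
Note: the "addons enable" step itself passes even though the VM is down, which suggests it only records the addon (with the fake.domain registry override) in the profile config; the test then fails one step later, on the kubectl describe that needs a live apiserver. The stored addon state can be inspected without a running cluster:

	out/minikube-darwin-arm64 addons list -p default-k8s-diff-port-265000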

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-265000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-265000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.177130292s)

-- stdout --
	* [default-k8s-diff-port-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-265000" primary control-plane node in "default-k8s-diff-port-265000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-265000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-265000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:41:21.905151    9430 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:41:21.905279    9430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:21.905292    9430 out.go:304] Setting ErrFile to fd 2...
	I0702 21:41:21.905295    9430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:21.905424    9430 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:41:21.906452    9430 out.go:298] Setting JSON to false
	I0702 21:41:21.922833    9430 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6050,"bootTime":1719975631,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:41:21.922905    9430 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:41:21.926518    9430 out.go:177] * [default-k8s-diff-port-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:41:21.933399    9430 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:41:21.933433    9430 notify.go:220] Checking for updates...
	I0702 21:41:21.940363    9430 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:41:21.943361    9430 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:41:21.946444    9430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:41:21.949380    9430 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:41:21.952400    9430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:41:21.955759    9430 config.go:182] Loaded profile config "default-k8s-diff-port-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:41:21.956071    9430 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:41:21.960371    9430 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:41:21.967369    9430 start.go:297] selected driver: qemu2
	I0702 21:41:21.967379    9430 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:41:21.967436    9430 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:41:21.970058    9430 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:41:21.970097    9430 cni.go:84] Creating CNI manager for ""
	I0702 21:41:21.970105    9430 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:41:21.970130    9430 start.go:340] cluster config:
	{Name:default-k8s-diff-port-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:41:21.974054    9430 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:41:21.981354    9430 out.go:177] * Starting "default-k8s-diff-port-265000" primary control-plane node in "default-k8s-diff-port-265000" cluster
	I0702 21:41:21.985405    9430 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:41:21.985434    9430 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:41:21.985444    9430 cache.go:56] Caching tarball of preloaded images
	I0702 21:41:21.985528    9430 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:41:21.985536    9430 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:41:21.985597    9430 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/default-k8s-diff-port-265000/config.json ...
	I0702 21:41:21.985924    9430 start.go:360] acquireMachinesLock for default-k8s-diff-port-265000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:41:21.985961    9430 start.go:364] duration metric: took 28.584µs to acquireMachinesLock for "default-k8s-diff-port-265000"
	I0702 21:41:21.985972    9430 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:41:21.985977    9430 fix.go:54] fixHost starting: 
	I0702 21:41:21.986093    9430 fix.go:112] recreateIfNeeded on default-k8s-diff-port-265000: state=Stopped err=<nil>
	W0702 21:41:21.986102    9430 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:41:21.989395    9430 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-265000" ...
	I0702 21:41:21.996416    9430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:40:bb:c4:77:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2
	I0702 21:41:21.998770    9430 main.go:141] libmachine: STDOUT: 
	I0702 21:41:21.998787    9430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:41:21.998814    9430 fix.go:56] duration metric: took 12.835375ms for fixHost
	I0702 21:41:21.998818    9430 start.go:83] releasing machines lock for "default-k8s-diff-port-265000", held for 12.853167ms
	W0702 21:41:21.998827    9430 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:41:21.998870    9430 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:21.998874    9430 start.go:728] Will try again in 5 seconds ...
	I0702 21:41:27.000890    9430 start.go:360] acquireMachinesLock for default-k8s-diff-port-265000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:41:27.001009    9430 start.go:364] duration metric: took 94.834µs to acquireMachinesLock for "default-k8s-diff-port-265000"
	I0702 21:41:27.001041    9430 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:41:27.001045    9430 fix.go:54] fixHost starting: 
	I0702 21:41:27.001198    9430 fix.go:112] recreateIfNeeded on default-k8s-diff-port-265000: state=Stopped err=<nil>
	W0702 21:41:27.001204    9430 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:41:27.005274    9430 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-265000" ...
	I0702 21:41:27.012326    9430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:40:bb:c4:77:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/default-k8s-diff-port-265000/disk.qcow2
	I0702 21:41:27.014447    9430 main.go:141] libmachine: STDOUT: 
	I0702 21:41:27.014465    9430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:41:27.014487    9430 fix.go:56] duration metric: took 13.442208ms for fixHost
	I0702 21:41:27.014491    9430 start.go:83] releasing machines lock for "default-k8s-diff-port-265000", held for 13.4775ms
	W0702 21:41:27.014538    9430 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-265000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-265000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:27.025259    9430 out.go:177] 
	W0702 21:41:27.028409    9430 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:41:27.028416    9430 out.go:239] * 
	* 
	W0702 21:41:27.028899    9430 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:41:27.042318    9430 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-265000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000: exit status 7 (32.444333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.21s)
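
Note: SecondStart exercises the restart path ("Skipping create...Using existing machine configuration", then fixHost), so there is no disk-image work this time; it still dies on the same socket_vmnet connection. The client side can be exercised in isolation, assuming socket_vmnet_client's usual exec semantics (it connects to the socket and hands it to the child as fd 3, matching the -netdev socket,id=net0,fd=3 argument in the log):

	# Succeeds only if the daemon behind /var/run/socket_vmnet is up:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true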

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-265000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000: exit status 7 (29.352333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-265000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-265000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-265000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.05625ms)

** stderr ** 
	error: context "default-k8s-diff-port-265000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-265000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000: exit status 7 (31.905083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-265000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000: exit status 7 (30.003958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
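The "(-want +got)" diff above is the output convention of the github.com/google/go-cmp package. A minimal sketch of how such a diff is produced (an illustration, not the actual start_stop_delete_test.go code; the want list is copied from the log, and the empty got slice is an assumption reflecting that no VM was running to list images from):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Expected v1.30.2 images, copied verbatim from the failure above.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/kube-apiserver:v1.30.2",
		"registry.k8s.io/kube-controller-manager:v1.30.2",
		"registry.k8s.io/kube-proxy:v1.30.2",
		"registry.k8s.io/kube-scheduler:v1.30.2",
		"registry.k8s.io/pause:3.9",
	}
	var got []string // empty: `image list` had no running VM to query
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.2 images missing (-want +got):\n%s", diff)
	}
}

Because got is empty, every expected image lands on a "-" line, exactly as in the failure above.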

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-265000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-265000 --alsologtostderr -v=1: exit status 83 (40.605083ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-265000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-265000"

-- /stdout --
** stderr ** 
	I0702 21:41:27.271497    9449 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:41:27.271672    9449 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:27.271678    9449 out.go:304] Setting ErrFile to fd 2...
	I0702 21:41:27.271680    9449 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:27.271802    9449 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:41:27.272045    9449 out.go:298] Setting JSON to false
	I0702 21:41:27.272054    9449 mustload.go:65] Loading cluster: default-k8s-diff-port-265000
	I0702 21:41:27.272225    9449 config.go:182] Loaded profile config "default-k8s-diff-port-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:41:27.275697    9449 out.go:177] * The control-plane node default-k8s-diff-port-265000 host is not running: state=Stopped
	I0702 21:41:27.279675    9449 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-265000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-265000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000: exit status 7 (28.898125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-265000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000: exit status 7 (29.62325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
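The --format={{.Host}} argument used by every post-mortem probe is a Go text/template rendered against minikube's status object. A tiny stand-alone sketch (the Status struct below is a simplified stand-in, not minikube's real type):

package main

import (
	"os"
	"text/template"
)

// Status is a simplified stand-in; only the Host field matters for the
// --format={{.Host}} template used throughout this report.
type Status struct {
	Host string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"}) // prints "Stopped"
}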

TestStartStop/group/newest-cni/serial/FirstStart (9.73s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-777000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-777000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (9.698211542s)

-- stdout --
	* [newest-cni-777000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-777000" primary control-plane node in "newest-cni-777000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-777000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:41:27.577115    9466 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:41:27.577383    9466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:27.577390    9466 out.go:304] Setting ErrFile to fd 2...
	I0702 21:41:27.577392    9466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:27.577535    9466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:41:27.578868    9466 out.go:298] Setting JSON to false
	I0702 21:41:27.595438    9466 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6056,"bootTime":1719975631,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:41:27.595502    9466 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:41:27.600709    9466 out.go:177] * [newest-cni-777000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:41:27.606712    9466 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:41:27.606770    9466 notify.go:220] Checking for updates...
	I0702 21:41:27.613665    9466 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:41:27.616605    9466 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:41:27.619677    9466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:41:27.626639    9466 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:41:27.629707    9466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:41:27.632937    9466 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:41:27.632999    9466 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:41:27.633050    9466 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:41:27.637656    9466 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:41:27.644644    9466 start.go:297] selected driver: qemu2
	I0702 21:41:27.644651    9466 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:41:27.644657    9466 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:41:27.647118    9466 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0702 21:41:27.647152    9466 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0702 21:41:27.654676    9466 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:41:27.657745    9466 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0702 21:41:27.657798    9466 cni.go:84] Creating CNI manager for ""
	I0702 21:41:27.657807    9466 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:41:27.657811    9466 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:41:27.657850    9466 start.go:340] cluster config:
	{Name:newest-cni-777000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:41:27.661662    9466 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:41:27.669682    9466 out.go:177] * Starting "newest-cni-777000" primary control-plane node in "newest-cni-777000" cluster
	I0702 21:41:27.673483    9466 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:41:27.673498    9466 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:41:27.673505    9466 cache.go:56] Caching tarball of preloaded images
	I0702 21:41:27.673573    9466 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:41:27.673579    9466 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:41:27.673646    9466 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/newest-cni-777000/config.json ...
	I0702 21:41:27.673661    9466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/newest-cni-777000/config.json: {Name:mk44660bcbe8151ee003eaf88171b7a9832537d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:41:27.673983    9466 start.go:360] acquireMachinesLock for newest-cni-777000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:41:27.674017    9466 start.go:364] duration metric: took 28.041µs to acquireMachinesLock for "newest-cni-777000"
	I0702 21:41:27.674030    9466 start.go:93] Provisioning new machine with config: &{Name:newest-cni-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:41:27.674075    9466 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:41:27.682448    9466 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:41:27.699293    9466 start.go:159] libmachine.API.Create for "newest-cni-777000" (driver="qemu2")
	I0702 21:41:27.699321    9466 client.go:168] LocalClient.Create starting
	I0702 21:41:27.699387    9466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:41:27.699421    9466 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:27.699429    9466 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:27.699469    9466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:41:27.699492    9466 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:27.699500    9466 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:27.699893    9466 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:41:27.826133    9466 main.go:141] libmachine: Creating SSH key...
	I0702 21:41:27.891880    9466 main.go:141] libmachine: Creating Disk image...
	I0702 21:41:27.891887    9466 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:41:27.892080    9466 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2
	I0702 21:41:27.901440    9466 main.go:141] libmachine: STDOUT: 
	I0702 21:41:27.901466    9466 main.go:141] libmachine: STDERR: 
	I0702 21:41:27.901525    9466 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2 +20000M
	I0702 21:41:27.910949    9466 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:41:27.910980    9466 main.go:141] libmachine: STDERR: 
	I0702 21:41:27.910996    9466 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2
	I0702 21:41:27.911002    9466 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:41:27.911037    9466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:fd:a1:bc:47:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2
	I0702 21:41:27.913234    9466 main.go:141] libmachine: STDOUT: 
	I0702 21:41:27.913254    9466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:41:27.913271    9466 client.go:171] duration metric: took 213.948625ms to LocalClient.Create
	I0702 21:41:29.915344    9466 start.go:128] duration metric: took 2.241300916s to createHost
	I0702 21:41:29.915367    9466 start.go:83] releasing machines lock for "newest-cni-777000", held for 2.241390042s
	W0702 21:41:29.915410    9466 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:29.920221    9466 out.go:177] * Deleting "newest-cni-777000" in qemu2 ...
	W0702 21:41:29.935614    9466 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:29.935622    9466 start.go:728] Will try again in 5 seconds ...
	I0702 21:41:34.937618    9466 start.go:360] acquireMachinesLock for newest-cni-777000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:41:34.937883    9466 start.go:364] duration metric: took 223.333µs to acquireMachinesLock for "newest-cni-777000"
	I0702 21:41:34.937962    9466 start.go:93] Provisioning new machine with config: &{Name:newest-cni-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:41:34.938106    9466 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:41:34.948411    9466 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0702 21:41:34.974816    9466 start.go:159] libmachine.API.Create for "newest-cni-777000" (driver="qemu2")
	I0702 21:41:34.974853    9466 client.go:168] LocalClient.Create starting
	I0702 21:41:34.974937    9466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:41:34.974979    9466 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:34.974991    9466 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:34.975042    9466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:41:34.975073    9466 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:34.975080    9466 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:34.975648    9466 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:41:35.103503    9466 main.go:141] libmachine: Creating SSH key...
	I0702 21:41:35.194707    9466 main.go:141] libmachine: Creating Disk image...
	I0702 21:41:35.194714    9466 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:41:35.194905    9466 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2
	I0702 21:41:35.204444    9466 main.go:141] libmachine: STDOUT: 
	I0702 21:41:35.204460    9466 main.go:141] libmachine: STDERR: 
	I0702 21:41:35.204504    9466 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2 +20000M
	I0702 21:41:35.212419    9466 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:41:35.212440    9466 main.go:141] libmachine: STDERR: 
	I0702 21:41:35.212452    9466 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2
	I0702 21:41:35.212457    9466 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:41:35.212488    9466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:52:68:63:56:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2
	I0702 21:41:35.214142    9466 main.go:141] libmachine: STDOUT: 
	I0702 21:41:35.214165    9466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:41:35.214178    9466 client.go:171] duration metric: took 239.32675ms to LocalClient.Create
	I0702 21:41:37.216222    9466 start.go:128] duration metric: took 2.278145375s to createHost
	I0702 21:41:37.216276    9466 start.go:83] releasing machines lock for "newest-cni-777000", held for 2.278414041s
	W0702 21:41:37.216404    9466 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:37.227689    9466 out.go:177] 
	W0702 21:41:37.230739    9466 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:41:37.230762    9466 out.go:239] * 
	* 
	W0702 21:41:37.231357    9466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:41:37.237644    9466 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-777000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-777000 -n newest-cni-777000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-777000 -n newest-cni-777000: exit status 7 (35.133333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-777000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.73s)
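Every start failure in this run reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a networking file descriptor and the driver aborts. A minimal probe that reproduces the "Connection refused" (a diagnostic sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Probe the same Unix socket the qemu2 driver passes to
	// socket_vmnet_client. With no socket_vmnet service listening,
	// this fails with "connect: connection refused", matching the log.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("ERROR:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is reachable")
}

On a healthy agent the dial succeeds, presumably because the socket_vmnet service holds the socket open; here it refuses, which explains both the create and restart paths failing identically.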

TestStartStop/group/newest-cni/serial/SecondStart (5.22s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-777000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-777000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.173556375s)

-- stdout --
	* [newest-cni-777000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-777000" primary control-plane node in "newest-cni-777000" cluster
	* Restarting existing qemu2 VM for "newest-cni-777000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-777000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:41:40.683204    9517 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:41:40.683351    9517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:40.683355    9517 out.go:304] Setting ErrFile to fd 2...
	I0702 21:41:40.683357    9517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:40.683494    9517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:41:40.684533    9517 out.go:298] Setting JSON to false
	I0702 21:41:40.700916    9517 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6069,"bootTime":1719975631,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:41:40.701008    9517 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:41:40.704662    9517 out.go:177] * [newest-cni-777000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:41:40.711771    9517 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:41:40.711827    9517 notify.go:220] Checking for updates...
	I0702 21:41:40.718688    9517 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:41:40.721802    9517 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:41:40.724638    9517 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:41:40.727717    9517 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:41:40.730834    9517 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:41:40.733942    9517 config.go:182] Loaded profile config "newest-cni-777000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:41:40.734183    9517 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:41:40.737712    9517 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:41:40.744682    9517 start.go:297] selected driver: qemu2
	I0702 21:41:40.744689    9517 start.go:901] validating driver "qemu2" against &{Name:newest-cni-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:41:40.744751    9517 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:41:40.747097    9517 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0702 21:41:40.747133    9517 cni.go:84] Creating CNI manager for ""
	I0702 21:41:40.747140    9517 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:41:40.747164    9517 start.go:340] cluster config:
	{Name:newest-cni-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:41:40.750438    9517 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:41:40.758697    9517 out.go:177] * Starting "newest-cni-777000" primary control-plane node in "newest-cni-777000" cluster
	I0702 21:41:40.762711    9517 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:41:40.762727    9517 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:41:40.762738    9517 cache.go:56] Caching tarball of preloaded images
	I0702 21:41:40.762804    9517 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:41:40.762816    9517 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:41:40.762869    9517 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/newest-cni-777000/config.json ...
	I0702 21:41:40.763255    9517 start.go:360] acquireMachinesLock for newest-cni-777000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:41:40.763281    9517 start.go:364] duration metric: took 20.459µs to acquireMachinesLock for "newest-cni-777000"
	I0702 21:41:40.763300    9517 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:41:40.763304    9517 fix.go:54] fixHost starting: 
	I0702 21:41:40.763407    9517 fix.go:112] recreateIfNeeded on newest-cni-777000: state=Stopped err=<nil>
	W0702 21:41:40.763415    9517 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:41:40.767591    9517 out.go:177] * Restarting existing qemu2 VM for "newest-cni-777000" ...
	I0702 21:41:40.775733    9517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:52:68:63:56:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2
	I0702 21:41:40.777566    9517 main.go:141] libmachine: STDOUT: 
	I0702 21:41:40.777584    9517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:41:40.777608    9517 fix.go:56] duration metric: took 14.304208ms for fixHost
	I0702 21:41:40.777611    9517 start.go:83] releasing machines lock for "newest-cni-777000", held for 14.327291ms
	W0702 21:41:40.777617    9517 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:41:40.777645    9517 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:40.777649    9517 start.go:728] Will try again in 5 seconds ...
	I0702 21:41:45.779853    9517 start.go:360] acquireMachinesLock for newest-cni-777000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:41:45.780337    9517 start.go:364] duration metric: took 351.333µs to acquireMachinesLock for "newest-cni-777000"
	I0702 21:41:45.780500    9517 start.go:96] Skipping create...Using existing machine configuration
	I0702 21:41:45.780522    9517 fix.go:54] fixHost starting: 
	I0702 21:41:45.781256    9517 fix.go:112] recreateIfNeeded on newest-cni-777000: state=Stopped err=<nil>
	W0702 21:41:45.781282    9517 fix.go:138] unexpected machine state, will restart: <nil>
	I0702 21:41:45.785800    9517 out.go:177] * Restarting existing qemu2 VM for "newest-cni-777000" ...
	I0702 21:41:45.788947    9517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:52:68:63:56:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/newest-cni-777000/disk.qcow2
	I0702 21:41:45.796704    9517 main.go:141] libmachine: STDOUT: 
	I0702 21:41:45.796763    9517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:41:45.796848    9517 fix.go:56] duration metric: took 16.329542ms for fixHost
	I0702 21:41:45.796863    9517 start.go:83] releasing machines lock for "newest-cni-777000", held for 16.504542ms
	W0702 21:41:45.797106    9517 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-777000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-777000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:45.804785    9517 out.go:177] 
	W0702 21:41:45.808842    9517 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:41:45.808856    9517 out.go:239] * 
	* 
	W0702 21:41:45.810192    9517 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:41:45.818773    9517 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-777000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-777000 -n newest-cni-777000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-777000 -n newest-cni-777000: exit status 7 (50.002291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-777000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.22s)
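The restart path above fails, logs "Will try again in 5 seconds ...", retries once, and only then exits with GUEST_PROVISION. A compact sketch of that retry shape (startHost is a hypothetical stand-in that always fails the way the qemu2 driver does here, not minikube's actual start.go):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the host start path; it always fails the way
// the driver does when socket_vmnet is unreachable.
func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}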

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-777000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-777000 -n newest-cni-777000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-777000 -n newest-cni-777000: exit status 7 (29.6955ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-777000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-777000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-777000 --alsologtostderr -v=1: exit status 83 (44.6315ms)

-- stdout --
	* The control-plane node newest-cni-777000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-777000"

-- /stdout --
** stderr ** 
	I0702 21:41:45.978155    9531 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:41:45.978335    9531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:45.978340    9531 out.go:304] Setting ErrFile to fd 2...
	I0702 21:41:45.978343    9531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:45.978507    9531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:41:45.978750    9531 out.go:298] Setting JSON to false
	I0702 21:41:45.978761    9531 mustload.go:65] Loading cluster: newest-cni-777000
	I0702 21:41:45.978962    9531 config.go:182] Loaded profile config "newest-cni-777000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:41:45.983312    9531 out.go:177] * The control-plane node newest-cni-777000 host is not running: state=Stopped
	I0702 21:41:45.987322    9531 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-777000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-777000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-777000 -n newest-cni-777000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-777000 -n newest-cni-777000: exit status 7 (30.203458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-777000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-777000 -n newest-cni-777000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-777000 -n newest-cni-777000: exit status 7 (30.350709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-777000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)
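
Triage note: exit status 83 accompanies minikube's "host is not running" guidance shown above, consistent with the Stopped host in both post-mortem probes. The check can be replayed by hand with the exact command recorded in this log:

    out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-777000 -n newest-cni-777000
    echo $?   # 7 here corresponds to the Stopped host state recorded above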

TestNetworkPlugins/group/auto/Start (9.83s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.83170425s)

-- stdout --
	* [auto-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-967000" primary control-plane node in "auto-967000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:41:46.286831    9548 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:41:46.286985    9548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:46.286994    9548 out.go:304] Setting ErrFile to fd 2...
	I0702 21:41:46.286996    9548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:46.287143    9548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:41:46.288303    9548 out.go:298] Setting JSON to false
	I0702 21:41:46.304732    9548 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6075,"bootTime":1719975631,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:41:46.304798    9548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:41:46.308413    9548 out.go:177] * [auto-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:41:46.315367    9548 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:41:46.315454    9548 notify.go:220] Checking for updates...
	I0702 21:41:46.322321    9548 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:41:46.325389    9548 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:41:46.328367    9548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:41:46.331286    9548 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:41:46.334347    9548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:41:46.337532    9548 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:41:46.337596    9548 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:41:46.337648    9548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:41:46.341340    9548 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:41:46.348280    9548 start.go:297] selected driver: qemu2
	I0702 21:41:46.348287    9548 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:41:46.348293    9548 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:41:46.350675    9548 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:41:46.354302    9548 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:41:46.357414    9548 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:41:46.357439    9548 cni.go:84] Creating CNI manager for ""
	I0702 21:41:46.357448    9548 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:41:46.357452    9548 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:41:46.357479    9548 start.go:340] cluster config:
	{Name:auto-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:41:46.361046    9548 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:41:46.369346    9548 out.go:177] * Starting "auto-967000" primary control-plane node in "auto-967000" cluster
	I0702 21:41:46.373161    9548 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:41:46.373177    9548 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:41:46.373188    9548 cache.go:56] Caching tarball of preloaded images
	I0702 21:41:46.373256    9548 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:41:46.373262    9548 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:41:46.373336    9548 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/auto-967000/config.json ...
	I0702 21:41:46.373353    9548 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/auto-967000/config.json: {Name:mkc7d1e11d54fadeb87904a712e6e1cd33c24d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:41:46.373672    9548 start.go:360] acquireMachinesLock for auto-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:41:46.373705    9548 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "auto-967000"
	I0702 21:41:46.373718    9548 start.go:93] Provisioning new machine with config: &{Name:auto-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:41:46.373753    9548 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:41:46.378368    9548 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:41:46.395885    9548 start.go:159] libmachine.API.Create for "auto-967000" (driver="qemu2")
	I0702 21:41:46.395906    9548 client.go:168] LocalClient.Create starting
	I0702 21:41:46.395967    9548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:41:46.395996    9548 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:46.396004    9548 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:46.396047    9548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:41:46.396070    9548 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:46.396081    9548 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:46.396465    9548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:41:46.522295    9548 main.go:141] libmachine: Creating SSH key...
	I0702 21:41:46.703006    9548 main.go:141] libmachine: Creating Disk image...
	I0702 21:41:46.703014    9548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:41:46.703232    9548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/disk.qcow2
	I0702 21:41:46.713293    9548 main.go:141] libmachine: STDOUT: 
	I0702 21:41:46.713313    9548 main.go:141] libmachine: STDERR: 
	I0702 21:41:46.713368    9548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/disk.qcow2 +20000M
	I0702 21:41:46.721464    9548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:41:46.721478    9548 main.go:141] libmachine: STDERR: 
	I0702 21:41:46.721489    9548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/disk.qcow2
	I0702 21:41:46.721494    9548 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:41:46.721518    9548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:55:e9:b3:93:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/disk.qcow2
	I0702 21:41:46.723229    9548 main.go:141] libmachine: STDOUT: 
	I0702 21:41:46.723243    9548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:41:46.723265    9548 client.go:171] duration metric: took 327.360375ms to LocalClient.Create
	I0702 21:41:48.725437    9548 start.go:128] duration metric: took 2.351698125s to createHost
	I0702 21:41:48.725547    9548 start.go:83] releasing machines lock for "auto-967000", held for 2.351878167s
	W0702 21:41:48.725630    9548 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:48.738752    9548 out.go:177] * Deleting "auto-967000" in qemu2 ...
	W0702 21:41:48.763799    9548 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:48.763831    9548 start.go:728] Will try again in 5 seconds ...
	I0702 21:41:53.766006    9548 start.go:360] acquireMachinesLock for auto-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:41:53.766616    9548 start.go:364] duration metric: took 489.5µs to acquireMachinesLock for "auto-967000"
	I0702 21:41:53.766773    9548 start.go:93] Provisioning new machine with config: &{Name:auto-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:41:53.767100    9548 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:41:53.776759    9548 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:41:53.828955    9548 start.go:159] libmachine.API.Create for "auto-967000" (driver="qemu2")
	I0702 21:41:53.829004    9548 client.go:168] LocalClient.Create starting
	I0702 21:41:53.829132    9548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:41:53.829200    9548 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:53.829218    9548 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:53.829282    9548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:41:53.829327    9548 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:53.829339    9548 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:53.829851    9548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:41:53.969978    9548 main.go:141] libmachine: Creating SSH key...
	I0702 21:41:54.039938    9548 main.go:141] libmachine: Creating Disk image...
	I0702 21:41:54.039946    9548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:41:54.040129    9548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/disk.qcow2
	I0702 21:41:54.049353    9548 main.go:141] libmachine: STDOUT: 
	I0702 21:41:54.049371    9548 main.go:141] libmachine: STDERR: 
	I0702 21:41:54.049416    9548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/disk.qcow2 +20000M
	I0702 21:41:54.057554    9548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:41:54.057566    9548 main.go:141] libmachine: STDERR: 
	I0702 21:41:54.057578    9548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/disk.qcow2
	I0702 21:41:54.057587    9548 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:41:54.057610    9548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:7e:92:7d:1b:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/auto-967000/disk.qcow2
	I0702 21:41:54.059246    9548 main.go:141] libmachine: STDOUT: 
	I0702 21:41:54.059261    9548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:41:54.059273    9548 client.go:171] duration metric: took 230.267667ms to LocalClient.Create
	I0702 21:41:56.061303    9548 start.go:128] duration metric: took 2.294225625s to createHost
	I0702 21:41:56.061324    9548 start.go:83] releasing machines lock for "auto-967000", held for 2.2947275s
	W0702 21:41:56.061386    9548 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:41:56.065726    9548 out.go:177] 
	W0702 21:41:56.070623    9548 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:41:56.070635    9548 out.go:239] * 
	* 
	W0702 21:41:56.071084    9548 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:41:56.082632    9548 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.83s)
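
Triage note: every qemu2 start in this group dies on the same refused connection to /var/run/socket_vmnet before the VM boots, so the failure is host-side rather than test-specific. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew as in the minikube qemu2 driver docs (paths and service management on this Jenkins agent may differ):

    ls -l /var/run/socket_vmnet               # does the control socket exist?
    pgrep -fl socket_vmnet                    # is the daemon process alive?
    sudo brew services restart socket_vmnet   # restart the daemon (runs as root for vmnet access)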

TestNetworkPlugins/group/calico/Start (9.86s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.858871458s)

-- stdout --
	* [calico-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-967000" primary control-plane node in "calico-967000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:41:58.269795    9661 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:41:58.269932    9661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:58.269936    9661 out.go:304] Setting ErrFile to fd 2...
	I0702 21:41:58.269939    9661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:41:58.270086    9661 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:41:58.271256    9661 out.go:298] Setting JSON to false
	I0702 21:41:58.287961    9661 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6087,"bootTime":1719975631,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:41:58.288029    9661 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:41:58.293612    9661 out.go:177] * [calico-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:41:58.301537    9661 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:41:58.301603    9661 notify.go:220] Checking for updates...
	I0702 21:41:58.308498    9661 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:41:58.311530    9661 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:41:58.314569    9661 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:41:58.315977    9661 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:41:58.319531    9661 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:41:58.322804    9661 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:41:58.322871    9661 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:41:58.322927    9661 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:41:58.327346    9661 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:41:58.334490    9661 start.go:297] selected driver: qemu2
	I0702 21:41:58.334498    9661 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:41:58.334506    9661 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:41:58.336766    9661 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:41:58.340474    9661 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:41:58.343615    9661 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:41:58.343661    9661 cni.go:84] Creating CNI manager for "calico"
	I0702 21:41:58.343669    9661 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0702 21:41:58.343708    9661 start.go:340] cluster config:
	{Name:calico-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:41:58.347515    9661 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:41:58.354491    9661 out.go:177] * Starting "calico-967000" primary control-plane node in "calico-967000" cluster
	I0702 21:41:58.358511    9661 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:41:58.358527    9661 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:41:58.358539    9661 cache.go:56] Caching tarball of preloaded images
	I0702 21:41:58.358604    9661 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:41:58.358611    9661 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:41:58.358665    9661 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/calico-967000/config.json ...
	I0702 21:41:58.358677    9661 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/calico-967000/config.json: {Name:mk5c3108847abf50392c46c1a8ac7ffcebbd2901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:41:58.358894    9661 start.go:360] acquireMachinesLock for calico-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:41:58.358926    9661 start.go:364] duration metric: took 27µs to acquireMachinesLock for "calico-967000"
	I0702 21:41:58.358939    9661 start.go:93] Provisioning new machine with config: &{Name:calico-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:41:58.358976    9661 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:41:58.366497    9661 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:41:58.383022    9661 start.go:159] libmachine.API.Create for "calico-967000" (driver="qemu2")
	I0702 21:41:58.383045    9661 client.go:168] LocalClient.Create starting
	I0702 21:41:58.383107    9661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:41:58.383136    9661 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:58.383143    9661 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:58.383182    9661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:41:58.383204    9661 main.go:141] libmachine: Decoding PEM data...
	I0702 21:41:58.383212    9661 main.go:141] libmachine: Parsing certificate...
	I0702 21:41:58.383536    9661 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:41:58.512271    9661 main.go:141] libmachine: Creating SSH key...
	I0702 21:41:58.590216    9661 main.go:141] libmachine: Creating Disk image...
	I0702 21:41:58.590227    9661 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:41:58.590408    9661 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/disk.qcow2
	I0702 21:41:58.599892    9661 main.go:141] libmachine: STDOUT: 
	I0702 21:41:58.599912    9661 main.go:141] libmachine: STDERR: 
	I0702 21:41:58.599974    9661 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/disk.qcow2 +20000M
	I0702 21:41:58.608152    9661 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:41:58.608167    9661 main.go:141] libmachine: STDERR: 
	I0702 21:41:58.608184    9661 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/disk.qcow2
	I0702 21:41:58.608189    9661 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:41:58.608224    9661 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:06:97:90:ce:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/disk.qcow2
	I0702 21:41:58.609776    9661 main.go:141] libmachine: STDOUT: 
	I0702 21:41:58.609790    9661 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:41:58.609811    9661 client.go:171] duration metric: took 226.766292ms to LocalClient.Create
	I0702 21:42:00.611947    9661 start.go:128] duration metric: took 2.252990875s to createHost
	I0702 21:42:00.612055    9661 start.go:83] releasing machines lock for "calico-967000", held for 2.25311025s
	W0702 21:42:00.612113    9661 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:00.621880    9661 out.go:177] * Deleting "calico-967000" in qemu2 ...
	W0702 21:42:00.638175    9661 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:00.638203    9661 start.go:728] Will try again in 5 seconds ...
	I0702 21:42:05.640302    9661 start.go:360] acquireMachinesLock for calico-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:42:05.640786    9661 start.go:364] duration metric: took 386.333µs to acquireMachinesLock for "calico-967000"
	I0702 21:42:05.640927    9661 start.go:93] Provisioning new machine with config: &{Name:calico-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:42:05.641219    9661 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:42:05.650856    9661 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:42:05.702318    9661 start.go:159] libmachine.API.Create for "calico-967000" (driver="qemu2")
	I0702 21:42:05.702379    9661 client.go:168] LocalClient.Create starting
	I0702 21:42:05.702487    9661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:42:05.702554    9661 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:05.702572    9661 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:05.702636    9661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:42:05.702682    9661 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:05.702709    9661 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:05.703212    9661 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:42:05.841807    9661 main.go:141] libmachine: Creating SSH key...
	I0702 21:42:06.046808    9661 main.go:141] libmachine: Creating Disk image...
	I0702 21:42:06.046820    9661 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:42:06.047029    9661 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/disk.qcow2
	I0702 21:42:06.056910    9661 main.go:141] libmachine: STDOUT: 
	I0702 21:42:06.056925    9661 main.go:141] libmachine: STDERR: 
	I0702 21:42:06.056966    9661 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/disk.qcow2 +20000M
	I0702 21:42:06.065188    9661 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:42:06.065203    9661 main.go:141] libmachine: STDERR: 
	I0702 21:42:06.065226    9661 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/disk.qcow2
	I0702 21:42:06.065231    9661 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:42:06.065264    9661 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:be:a8:b3:6c:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/calico-967000/disk.qcow2
	I0702 21:42:06.066997    9661 main.go:141] libmachine: STDOUT: 
	I0702 21:42:06.067012    9661 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:42:06.067025    9661 client.go:171] duration metric: took 364.648083ms to LocalClient.Create
	I0702 21:42:08.069070    9661 start.go:128] duration metric: took 2.427875s to createHost
	I0702 21:42:08.069094    9661 start.go:83] releasing machines lock for "calico-967000", held for 2.428334667s
	W0702 21:42:08.069217    9661 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:08.080016    9661 out.go:177] 
	W0702 21:42:08.083979    9661 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:42:08.083987    9661 out.go:239] * 
	* 
	W0702 21:42:08.084609    9661 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:42:08.095996    9661 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.86s)
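
Triage note: same root cause as the auto group above. Once the socket is reachable, the failing start can be replayed by hand with the exact flags recorded in this log before re-running the suite:

    out/minikube-darwin-arm64 start -p calico-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2
    out/minikube-darwin-arm64 delete -p calico-967000   # clean up the profile afterwards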

TestNetworkPlugins/group/custom-flannel/Start (9.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.82921375s)

-- stdout --
	* [custom-flannel-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-967000" primary control-plane node in "custom-flannel-967000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:42:10.472907    9789 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:42:10.473028    9789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:42:10.473032    9789 out.go:304] Setting ErrFile to fd 2...
	I0702 21:42:10.473035    9789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:42:10.473224    9789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:42:10.474296    9789 out.go:298] Setting JSON to false
	I0702 21:42:10.491051    9789 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6099,"bootTime":1719975631,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:42:10.491121    9789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:42:10.495897    9789 out.go:177] * [custom-flannel-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:42:10.502948    9789 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:42:10.503026    9789 notify.go:220] Checking for updates...
	I0702 21:42:10.509915    9789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:42:10.512967    9789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:42:10.515950    9789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:42:10.518897    9789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:42:10.521934    9789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:42:10.525284    9789 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:42:10.525349    9789 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:42:10.525397    9789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:42:10.529893    9789 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:42:10.536865    9789 start.go:297] selected driver: qemu2
	I0702 21:42:10.536872    9789 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:42:10.536878    9789 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:42:10.539240    9789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:42:10.541949    9789 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:42:10.544984    9789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:42:10.545015    9789 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0702 21:42:10.545027    9789 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0702 21:42:10.545057    9789 start.go:340] cluster config:
	{Name:custom-flannel-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:42:10.548816    9789 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:42:10.556870    9789 out.go:177] * Starting "custom-flannel-967000" primary control-plane node in "custom-flannel-967000" cluster
	I0702 21:42:10.561043    9789 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:42:10.561059    9789 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:42:10.561068    9789 cache.go:56] Caching tarball of preloaded images
	I0702 21:42:10.561125    9789 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:42:10.561130    9789 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:42:10.561198    9789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/custom-flannel-967000/config.json ...
	I0702 21:42:10.561212    9789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/custom-flannel-967000/config.json: {Name:mk5789e80c03470d2c62c6a9c2327e795c997b35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:42:10.561528    9789 start.go:360] acquireMachinesLock for custom-flannel-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:42:10.561561    9789 start.go:364] duration metric: took 24.208µs to acquireMachinesLock for "custom-flannel-967000"
	I0702 21:42:10.561572    9789 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:42:10.561598    9789 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:42:10.569904    9789 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:42:10.584886    9789 start.go:159] libmachine.API.Create for "custom-flannel-967000" (driver="qemu2")
	I0702 21:42:10.584912    9789 client.go:168] LocalClient.Create starting
	I0702 21:42:10.584993    9789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:42:10.585030    9789 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:10.585037    9789 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:10.585077    9789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:42:10.585099    9789 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:10.585106    9789 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:10.585429    9789 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:42:10.712091    9789 main.go:141] libmachine: Creating SSH key...
	I0702 21:42:10.858492    9789 main.go:141] libmachine: Creating Disk image...
	I0702 21:42:10.858503    9789 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:42:10.858720    9789 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/disk.qcow2
	I0702 21:42:10.868394    9789 main.go:141] libmachine: STDOUT: 
	I0702 21:42:10.868425    9789 main.go:141] libmachine: STDERR: 
	I0702 21:42:10.868476    9789 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/disk.qcow2 +20000M
	I0702 21:42:10.876748    9789 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:42:10.876766    9789 main.go:141] libmachine: STDERR: 
	I0702 21:42:10.876777    9789 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/disk.qcow2
	I0702 21:42:10.876781    9789 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:42:10.876809    9789 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:2c:4d:c9:fb:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/disk.qcow2
	I0702 21:42:10.878509    9789 main.go:141] libmachine: STDOUT: 
	I0702 21:42:10.878523    9789 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:42:10.878542    9789 client.go:171] duration metric: took 293.631459ms to LocalClient.Create
	I0702 21:42:12.880682    9789 start.go:128] duration metric: took 2.319103334s to createHost
	I0702 21:42:12.880773    9789 start.go:83] releasing machines lock for "custom-flannel-967000", held for 2.319250333s
	W0702 21:42:12.880853    9789 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:12.893743    9789 out.go:177] * Deleting "custom-flannel-967000" in qemu2 ...
	W0702 21:42:12.911968    9789 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:12.911994    9789 start.go:728] Will try again in 5 seconds ...
	I0702 21:42:17.914034    9789 start.go:360] acquireMachinesLock for custom-flannel-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:42:17.914267    9789 start.go:364] duration metric: took 175µs to acquireMachinesLock for "custom-flannel-967000"
	I0702 21:42:17.914291    9789 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:42:17.914401    9789 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:42:17.923655    9789 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:42:17.953875    9789 start.go:159] libmachine.API.Create for "custom-flannel-967000" (driver="qemu2")
	I0702 21:42:17.953922    9789 client.go:168] LocalClient.Create starting
	I0702 21:42:17.954046    9789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:42:17.954111    9789 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:17.954123    9789 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:17.954189    9789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:42:17.954223    9789 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:17.954256    9789 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:17.954677    9789 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:42:18.087420    9789 main.go:141] libmachine: Creating SSH key...
	I0702 21:42:18.218811    9789 main.go:141] libmachine: Creating Disk image...
	I0702 21:42:18.218820    9789 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:42:18.219013    9789 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/disk.qcow2
	I0702 21:42:18.228971    9789 main.go:141] libmachine: STDOUT: 
	I0702 21:42:18.228996    9789 main.go:141] libmachine: STDERR: 
	I0702 21:42:18.229060    9789 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/disk.qcow2 +20000M
	I0702 21:42:18.237348    9789 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:42:18.237364    9789 main.go:141] libmachine: STDERR: 
	I0702 21:42:18.237377    9789 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/disk.qcow2
	I0702 21:42:18.237380    9789 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:42:18.237411    9789 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:1a:11:03:fa:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/custom-flannel-967000/disk.qcow2
	I0702 21:42:18.239145    9789 main.go:141] libmachine: STDOUT: 
	I0702 21:42:18.239161    9789 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:42:18.239174    9789 client.go:171] duration metric: took 285.252917ms to LocalClient.Create
	I0702 21:42:20.241207    9789 start.go:128] duration metric: took 2.32684025s to createHost
	I0702 21:42:20.241228    9789 start.go:83] releasing machines lock for "custom-flannel-967000", held for 2.326998084s
	W0702 21:42:20.241313    9789 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:20.249541    9789 out.go:177] 
	W0702 21:42:20.254535    9789 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:42:20.254560    9789 out.go:239] * 
	* 
	W0702 21:42:20.255058    9789 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:42:20.265517    9789 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.83s)
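
Diagnosis sketch: this start never reaches VM boot. socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so qemu-system-aarch64 is never launched, and the retry five seconds later fails the same way. A minimal check on the affected host, assuming the default Homebrew install with a launchd-managed socket_vmnet daemon (the launchd service is an assumption, not taken from this log):

	# Does the daemon socket exist at the path minikube is using?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon loaded under launchd? (label assumed)
	sudo launchctl list | grep -i socket_vmnet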

TestNetworkPlugins/group/false/Start (9.67s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.6679095s)

-- stdout --
	* [false-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-967000" primary control-plane node in "false-967000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:42:22.602060    9911 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:42:22.602202    9911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:42:22.602206    9911 out.go:304] Setting ErrFile to fd 2...
	I0702 21:42:22.602208    9911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:42:22.602344    9911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:42:22.603384    9911 out.go:298] Setting JSON to false
	I0702 21:42:22.619367    9911 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6111,"bootTime":1719975631,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:42:22.619459    9911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:42:22.625121    9911 out.go:177] * [false-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:42:22.631962    9911 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:42:22.632034    9911 notify.go:220] Checking for updates...
	I0702 21:42:22.639087    9911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:42:22.640434    9911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:42:22.643075    9911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:42:22.646107    9911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:42:22.649108    9911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:42:22.652428    9911 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:42:22.652497    9911 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:42:22.652550    9911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:42:22.657044    9911 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:42:22.664100    9911 start.go:297] selected driver: qemu2
	I0702 21:42:22.664108    9911 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:42:22.664116    9911 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:42:22.666314    9911 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:42:22.669058    9911 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:42:22.672170    9911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:42:22.672185    9911 cni.go:84] Creating CNI manager for "false"
	I0702 21:42:22.672224    9911 start.go:340] cluster config:
	{Name:false-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:42:22.675844    9911 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:42:22.684101    9911 out.go:177] * Starting "false-967000" primary control-plane node in "false-967000" cluster
	I0702 21:42:22.687001    9911 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:42:22.687014    9911 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:42:22.687022    9911 cache.go:56] Caching tarball of preloaded images
	I0702 21:42:22.687072    9911 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:42:22.687077    9911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:42:22.687137    9911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/false-967000/config.json ...
	I0702 21:42:22.687148    9911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/false-967000/config.json: {Name:mkbe23c2a07f53f65ff806a678cf33b09c36b5ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:42:22.687477    9911 start.go:360] acquireMachinesLock for false-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:42:22.687521    9911 start.go:364] duration metric: took 35.625µs to acquireMachinesLock for "false-967000"
	I0702 21:42:22.687537    9911 start.go:93] Provisioning new machine with config: &{Name:false-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:42:22.687574    9911 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:42:22.694905    9911 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:42:22.710760    9911 start.go:159] libmachine.API.Create for "false-967000" (driver="qemu2")
	I0702 21:42:22.710784    9911 client.go:168] LocalClient.Create starting
	I0702 21:42:22.710850    9911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:42:22.710891    9911 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:22.710904    9911 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:22.710945    9911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:42:22.710967    9911 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:22.710975    9911 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:22.711377    9911 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:42:22.837227    9911 main.go:141] libmachine: Creating SSH key...
	I0702 21:42:22.868566    9911 main.go:141] libmachine: Creating Disk image...
	I0702 21:42:22.868571    9911 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:42:22.868738    9911 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/disk.qcow2
	I0702 21:42:22.877989    9911 main.go:141] libmachine: STDOUT: 
	I0702 21:42:22.878016    9911 main.go:141] libmachine: STDERR: 
	I0702 21:42:22.878071    9911 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/disk.qcow2 +20000M
	I0702 21:42:22.886052    9911 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:42:22.886066    9911 main.go:141] libmachine: STDERR: 
	I0702 21:42:22.886081    9911 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/disk.qcow2
	I0702 21:42:22.886085    9911 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:42:22.886111    9911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:ad:ba:d8:82:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/disk.qcow2
	I0702 21:42:22.887734    9911 main.go:141] libmachine: STDOUT: 
	I0702 21:42:22.887750    9911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:42:22.887769    9911 client.go:171] duration metric: took 176.983083ms to LocalClient.Create
	I0702 21:42:24.889978    9911 start.go:128] duration metric: took 2.202415583s to createHost
	I0702 21:42:24.890050    9911 start.go:83] releasing machines lock for "false-967000", held for 2.202562666s
	W0702 21:42:24.890111    9911 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:24.900354    9911 out.go:177] * Deleting "false-967000" in qemu2 ...
	W0702 21:42:24.920079    9911 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:24.920107    9911 start.go:728] Will try again in 5 seconds ...
	I0702 21:42:29.922206    9911 start.go:360] acquireMachinesLock for false-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:42:29.922497    9911 start.go:364] duration metric: took 229.542µs to acquireMachinesLock for "false-967000"
	I0702 21:42:29.922582    9911 start.go:93] Provisioning new machine with config: &{Name:false-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:42:29.922718    9911 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:42:29.930034    9911 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:42:29.961566    9911 start.go:159] libmachine.API.Create for "false-967000" (driver="qemu2")
	I0702 21:42:29.961610    9911 client.go:168] LocalClient.Create starting
	I0702 21:42:29.961705    9911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:42:29.961752    9911 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:29.961766    9911 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:29.961823    9911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:42:29.961861    9911 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:29.961870    9911 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:29.962361    9911 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:42:30.092074    9911 main.go:141] libmachine: Creating SSH key...
	I0702 21:42:30.178679    9911 main.go:141] libmachine: Creating Disk image...
	I0702 21:42:30.178688    9911 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:42:30.178873    9911 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/disk.qcow2
	I0702 21:42:30.189323    9911 main.go:141] libmachine: STDOUT: 
	I0702 21:42:30.189355    9911 main.go:141] libmachine: STDERR: 
	I0702 21:42:30.189439    9911 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/disk.qcow2 +20000M
	I0702 21:42:30.199124    9911 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:42:30.199155    9911 main.go:141] libmachine: STDERR: 
	I0702 21:42:30.199178    9911 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/disk.qcow2
	I0702 21:42:30.199183    9911 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:42:30.199223    9911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:55:dd:ef:b7:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/false-967000/disk.qcow2
	I0702 21:42:30.201542    9911 main.go:141] libmachine: STDOUT: 
	I0702 21:42:30.201568    9911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:42:30.201584    9911 client.go:171] duration metric: took 239.973084ms to LocalClient.Create
	I0702 21:42:32.203746    9911 start.go:128] duration metric: took 2.281039292s to createHost
	I0702 21:42:32.203827    9911 start.go:83] releasing machines lock for "false-967000", held for 2.281357333s
	W0702 21:42:32.204216    9911 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:32.213754    9911 out.go:177] 
	W0702 21:42:32.218913    9911 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:42:32.218929    9911 out.go:239] * 
	* 
	W0702 21:42:32.221439    9911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:42:32.227895    9911 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.67s)
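
The executed command in the log above shows the client's usage: socket_vmnet_client takes the daemon socket path followed by the command to exec once connected, and QEMU then inherits the connection via -netdev socket,fd=3. A sketch for reproducing the failure outside minikube, using only the paths already shown in this log (the trailing command is an arbitrary placeholder):

	# With the daemon down this prints the same error as the test output:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true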

TestNetworkPlugins/group/kindnet/Start (9.71s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.71074475s)

-- stdout --
	* [kindnet-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-967000" primary control-plane node in "kindnet-967000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:42:34.428019   10028 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:42:34.428132   10028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:42:34.428137   10028 out.go:304] Setting ErrFile to fd 2...
	I0702 21:42:34.428139   10028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:42:34.428253   10028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:42:34.429486   10028 out.go:298] Setting JSON to false
	I0702 21:42:34.446189   10028 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6123,"bootTime":1719975631,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:42:34.446254   10028 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:42:34.451168   10028 out.go:177] * [kindnet-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:42:34.458158   10028 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:42:34.458253   10028 notify.go:220] Checking for updates...
	I0702 21:42:34.465110   10028 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:42:34.468198   10028 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:42:34.471151   10028 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:42:34.474123   10028 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:42:34.477143   10028 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:42:34.480333   10028 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:42:34.480403   10028 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:42:34.480483   10028 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:42:34.484085   10028 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:42:34.490054   10028 start.go:297] selected driver: qemu2
	I0702 21:42:34.490061   10028 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:42:34.490067   10028 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:42:34.492423   10028 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:42:34.495096   10028 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:42:34.498150   10028 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:42:34.498172   10028 cni.go:84] Creating CNI manager for "kindnet"
	I0702 21:42:34.498176   10028 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0702 21:42:34.498206   10028 start.go:340] cluster config:
	{Name:kindnet-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:42:34.501935   10028 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:42:34.510085   10028 out.go:177] * Starting "kindnet-967000" primary control-plane node in "kindnet-967000" cluster
	I0702 21:42:34.514091   10028 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:42:34.514110   10028 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:42:34.514116   10028 cache.go:56] Caching tarball of preloaded images
	I0702 21:42:34.514178   10028 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:42:34.514183   10028 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:42:34.514236   10028 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/kindnet-967000/config.json ...
	I0702 21:42:34.514247   10028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/kindnet-967000/config.json: {Name:mkfc42985f822a012ff79173d12d42a37fa5c73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:42:34.514513   10028 start.go:360] acquireMachinesLock for kindnet-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:42:34.514547   10028 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "kindnet-967000"
	I0702 21:42:34.514560   10028 start.go:93] Provisioning new machine with config: &{Name:kindnet-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:42:34.514591   10028 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:42:34.523086   10028 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:42:34.540518   10028 start.go:159] libmachine.API.Create for "kindnet-967000" (driver="qemu2")
	I0702 21:42:34.540537   10028 client.go:168] LocalClient.Create starting
	I0702 21:42:34.540593   10028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:42:34.540623   10028 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:34.540631   10028 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:34.540669   10028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:42:34.540692   10028 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:34.540700   10028 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:34.541068   10028 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:42:34.696517   10028 main.go:141] libmachine: Creating SSH key...
	I0702 21:42:34.758428   10028 main.go:141] libmachine: Creating Disk image...
	I0702 21:42:34.758435   10028 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:42:34.758615   10028 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/disk.qcow2
	I0702 21:42:34.767932   10028 main.go:141] libmachine: STDOUT: 
	I0702 21:42:34.767956   10028 main.go:141] libmachine: STDERR: 
	I0702 21:42:34.768016   10028 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/disk.qcow2 +20000M
	I0702 21:42:34.775856   10028 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:42:34.775870   10028 main.go:141] libmachine: STDERR: 
	I0702 21:42:34.775889   10028 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/disk.qcow2
	I0702 21:42:34.775906   10028 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:42:34.775936   10028 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:31:29:57:72:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/disk.qcow2
	I0702 21:42:34.777580   10028 main.go:141] libmachine: STDOUT: 
	I0702 21:42:34.777595   10028 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:42:34.777614   10028 client.go:171] duration metric: took 237.076291ms to LocalClient.Create
	I0702 21:42:36.779796   10028 start.go:128] duration metric: took 2.265221167s to createHost
	I0702 21:42:36.779913   10028 start.go:83] releasing machines lock for "kindnet-967000", held for 2.265387166s
	W0702 21:42:36.779997   10028 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:36.791514   10028 out.go:177] * Deleting "kindnet-967000" in qemu2 ...
	W0702 21:42:36.813606   10028 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:36.813637   10028 start.go:728] Will try again in 5 seconds ...
	I0702 21:42:41.815502   10028 start.go:360] acquireMachinesLock for kindnet-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:42:41.815999   10028 start.go:364] duration metric: took 372.583µs to acquireMachinesLock for "kindnet-967000"
	I0702 21:42:41.816063   10028 start.go:93] Provisioning new machine with config: &{Name:kindnet-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:42:41.816349   10028 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:42:41.824960   10028 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:42:41.873954   10028 start.go:159] libmachine.API.Create for "kindnet-967000" (driver="qemu2")
	I0702 21:42:41.874018   10028 client.go:168] LocalClient.Create starting
	I0702 21:42:41.874139   10028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:42:41.874208   10028 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:41.874226   10028 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:41.874287   10028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:42:41.874331   10028 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:41.874343   10028 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:41.875037   10028 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:42:42.013308   10028 main.go:141] libmachine: Creating SSH key...
	I0702 21:42:42.049563   10028 main.go:141] libmachine: Creating Disk image...
	I0702 21:42:42.049569   10028 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:42:42.049752   10028 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/disk.qcow2
	I0702 21:42:42.059280   10028 main.go:141] libmachine: STDOUT: 
	I0702 21:42:42.059295   10028 main.go:141] libmachine: STDERR: 
	I0702 21:42:42.059339   10028 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/disk.qcow2 +20000M
	I0702 21:42:42.067224   10028 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:42:42.067240   10028 main.go:141] libmachine: STDERR: 
	I0702 21:42:42.067254   10028 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/disk.qcow2
	I0702 21:42:42.067267   10028 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:42:42.067305   10028 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:02:f2:da:e7:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kindnet-967000/disk.qcow2
	I0702 21:42:42.068921   10028 main.go:141] libmachine: STDOUT: 
	I0702 21:42:42.068938   10028 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:42:42.068950   10028 client.go:171] duration metric: took 194.929417ms to LocalClient.Create
	I0702 21:42:44.071125   10028 start.go:128] duration metric: took 2.254774792s to createHost
	I0702 21:42:44.071223   10028 start.go:83] releasing machines lock for "kindnet-967000", held for 2.255245625s
	W0702 21:42:44.071635   10028 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:44.082314   10028 out.go:177] 
	W0702 21:42:44.085311   10028 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:42:44.085335   10028 out.go:239] * 
	* 
	W0702 21:42:44.088795   10028 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:42:44.097156   10028 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.71s)
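Note: every failure in this group reduces to the same root cause shown in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the QEMU VM is never launched. Below is a minimal standalone probe for that condition; the socket path is taken from the logs, while the file name and output wording are illustrative and not part of the test suite:

	// probe_socket_vmnet.go - sketch: check whether anything is listening
	// on the socket_vmnet unix socket that the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "Connection refused" here is exactly the condition the tests hit:
			// the daemon that should own the socket is not running.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening on", sock)
	}

If the probe reports a refused connection, the socket_vmnet daemon is simply not running on the build host; restarting it (for example via its Homebrew/launchd service, assuming that is how it was installed on this agent) is the obvious first step before re-running the suite.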
TestNetworkPlugins/group/flannel/Start (9.72s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.719434s)
-- stdout --
	* [flannel-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-967000" primary control-plane node in "flannel-967000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0702 21:42:46.382078   10148 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:42:46.382229   10148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:42:46.382236   10148 out.go:304] Setting ErrFile to fd 2...
	I0702 21:42:46.382238   10148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:42:46.382363   10148 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:42:46.383509   10148 out.go:298] Setting JSON to false
	I0702 21:42:46.400229   10148 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6135,"bootTime":1719975631,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:42:46.400335   10148 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:42:46.405214   10148 out.go:177] * [flannel-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:42:46.413153   10148 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:42:46.413231   10148 notify.go:220] Checking for updates...
	I0702 21:42:46.420188   10148 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:42:46.423186   10148 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:42:46.426185   10148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:42:46.429152   10148 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:42:46.432124   10148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:42:46.435448   10148 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:42:46.435514   10148 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:42:46.435570   10148 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:42:46.439177   10148 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:42:46.446117   10148 start.go:297] selected driver: qemu2
	I0702 21:42:46.446123   10148 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:42:46.446128   10148 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:42:46.448307   10148 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:42:46.451176   10148 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:42:46.454183   10148 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:42:46.454202   10148 cni.go:84] Creating CNI manager for "flannel"
	I0702 21:42:46.454208   10148 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0702 21:42:46.454245   10148 start.go:340] cluster config:
	{Name:flannel-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:42:46.457809   10148 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:42:46.466230   10148 out.go:177] * Starting "flannel-967000" primary control-plane node in "flannel-967000" cluster
	I0702 21:42:46.470145   10148 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:42:46.470161   10148 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:42:46.470167   10148 cache.go:56] Caching tarball of preloaded images
	I0702 21:42:46.470220   10148 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:42:46.470225   10148 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:42:46.470277   10148 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/flannel-967000/config.json ...
	I0702 21:42:46.470288   10148 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/flannel-967000/config.json: {Name:mk5237f97ef00cbe7f15e5191c26270d6eb1d35c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:42:46.470500   10148 start.go:360] acquireMachinesLock for flannel-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:42:46.470533   10148 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "flannel-967000"
	I0702 21:42:46.470545   10148 start.go:93] Provisioning new machine with config: &{Name:flannel-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:42:46.470578   10148 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:42:46.479131   10148 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:42:46.494990   10148 start.go:159] libmachine.API.Create for "flannel-967000" (driver="qemu2")
	I0702 21:42:46.495011   10148 client.go:168] LocalClient.Create starting
	I0702 21:42:46.495080   10148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:42:46.495110   10148 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:46.495119   10148 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:46.495167   10148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:42:46.495190   10148 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:46.495198   10148 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:46.495615   10148 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:42:46.620403   10148 main.go:141] libmachine: Creating SSH key...
	I0702 21:42:46.747111   10148 main.go:141] libmachine: Creating Disk image...
	I0702 21:42:46.747120   10148 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:42:46.747299   10148 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/disk.qcow2
	I0702 21:42:46.756987   10148 main.go:141] libmachine: STDOUT: 
	I0702 21:42:46.757007   10148 main.go:141] libmachine: STDERR: 
	I0702 21:42:46.757062   10148 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/disk.qcow2 +20000M
	I0702 21:42:46.765236   10148 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:42:46.765254   10148 main.go:141] libmachine: STDERR: 
	I0702 21:42:46.765267   10148 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/disk.qcow2
	I0702 21:42:46.765272   10148 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:42:46.765310   10148 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:45:14:28:17:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/disk.qcow2
	I0702 21:42:46.767024   10148 main.go:141] libmachine: STDOUT: 
	I0702 21:42:46.767041   10148 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:42:46.767069   10148 client.go:171] duration metric: took 272.058458ms to LocalClient.Create
	I0702 21:42:48.769218   10148 start.go:128] duration metric: took 2.298663292s to createHost
	I0702 21:42:48.769276   10148 start.go:83] releasing machines lock for "flannel-967000", held for 2.2987795s
	W0702 21:42:48.769376   10148 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:48.782610   10148 out.go:177] * Deleting "flannel-967000" in qemu2 ...
	W0702 21:42:48.804416   10148 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:48.804454   10148 start.go:728] Will try again in 5 seconds ...
	I0702 21:42:53.805268   10148 start.go:360] acquireMachinesLock for flannel-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:42:53.805374   10148 start.go:364] duration metric: took 87.708µs to acquireMachinesLock for "flannel-967000"
	I0702 21:42:53.805402   10148 start.go:93] Provisioning new machine with config: &{Name:flannel-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:42:53.805440   10148 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:42:53.812669   10148 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:42:53.829178   10148 start.go:159] libmachine.API.Create for "flannel-967000" (driver="qemu2")
	I0702 21:42:53.829215   10148 client.go:168] LocalClient.Create starting
	I0702 21:42:53.829288   10148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:42:53.829325   10148 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:53.829332   10148 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:53.829381   10148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:42:53.829403   10148 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:53.829408   10148 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:53.829695   10148 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:42:53.961374   10148 main.go:141] libmachine: Creating SSH key...
	I0702 21:42:54.014988   10148 main.go:141] libmachine: Creating Disk image...
	I0702 21:42:54.014998   10148 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:42:54.015192   10148 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/disk.qcow2
	I0702 21:42:54.025395   10148 main.go:141] libmachine: STDOUT: 
	I0702 21:42:54.025414   10148 main.go:141] libmachine: STDERR: 
	I0702 21:42:54.025463   10148 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/disk.qcow2 +20000M
	I0702 21:42:54.033763   10148 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:42:54.033781   10148 main.go:141] libmachine: STDERR: 
	I0702 21:42:54.033792   10148 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/disk.qcow2
	I0702 21:42:54.033796   10148 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:42:54.033835   10148 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:ee:70:7b:b8:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/flannel-967000/disk.qcow2
	I0702 21:42:54.035583   10148 main.go:141] libmachine: STDOUT: 
	I0702 21:42:54.035600   10148 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:42:54.035614   10148 client.go:171] duration metric: took 206.398459ms to LocalClient.Create
	I0702 21:42:56.037760   10148 start.go:128] duration metric: took 2.232337167s to createHost
	I0702 21:42:56.037834   10148 start.go:83] releasing machines lock for "flannel-967000", held for 2.232496042s
	W0702 21:42:56.038228   10148 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:42:56.047813   10148 out.go:177] 
	W0702 21:42:56.051813   10148 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:42:56.051854   10148 out.go:239] * 
	* 
	W0702 21:42:56.053303   10148 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:42:56.061743   10148 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.72s)
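The control flow in each of these failed starts is identical: create the host; on error, delete the profile, wait five seconds, retry once, and finally exit with GUEST_PROVISION (exit status 80). A condensed sketch of that shape, reconstructed from the log lines above (the function names are illustrative, not minikube's actual API):

	// retry_flow.go - sketch of the start/retry behaviour visible in the logs.
	// createHost stands in for the real provisioning step and always fails
	// the way these runs do.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		profile := "flannel-967000"
		if err := createHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(profile); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				os.Exit(80) // matches "failed start: exit status 80"
			}
		}
	}

Because the second attempt hits the same refused connection, each test in this group burns roughly ten seconds (two ~2.3s create attempts plus the 5s backoff) before failing.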
TestNetworkPlugins/group/enable-default-cni/Start (9.84s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.837540583s)
-- stdout --
	* [enable-default-cni-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-967000" primary control-plane node in "enable-default-cni-967000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0702 21:42:58.374551   10269 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:42:58.374676   10269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:42:58.374680   10269 out.go:304] Setting ErrFile to fd 2...
	I0702 21:42:58.374683   10269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:42:58.374823   10269 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:42:58.375922   10269 out.go:298] Setting JSON to false
	I0702 21:42:58.392065   10269 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6147,"bootTime":1719975631,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:42:58.392132   10269 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:42:58.398282   10269 out.go:177] * [enable-default-cni-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:42:58.405280   10269 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:42:58.405335   10269 notify.go:220] Checking for updates...
	I0702 21:42:58.412263   10269 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:42:58.415224   10269 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:42:58.428269   10269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:42:58.431258   10269 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:42:58.434277   10269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:42:58.437624   10269 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:42:58.437685   10269 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:42:58.437738   10269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:42:58.442243   10269 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:42:58.449233   10269 start.go:297] selected driver: qemu2
	I0702 21:42:58.449241   10269 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:42:58.449247   10269 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:42:58.451372   10269 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:42:58.454232   10269 out.go:177] * Automatically selected the socket_vmnet network
	E0702 21:42:58.457335   10269 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0702 21:42:58.457347   10269 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:42:58.457388   10269 cni.go:84] Creating CNI manager for "bridge"
	I0702 21:42:58.457404   10269 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:42:58.457435   10269 start.go:340] cluster config:
	{Name:enable-default-cni-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:42:58.460843   10269 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:42:58.469303   10269 out.go:177] * Starting "enable-default-cni-967000" primary control-plane node in "enable-default-cni-967000" cluster
	I0702 21:42:58.473089   10269 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:42:58.473104   10269 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:42:58.473112   10269 cache.go:56] Caching tarball of preloaded images
	I0702 21:42:58.473167   10269 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:42:58.473173   10269 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:42:58.473266   10269 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/enable-default-cni-967000/config.json ...
	I0702 21:42:58.473284   10269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/enable-default-cni-967000/config.json: {Name:mk099eceb5e07d68dfed815feb070038695e53cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:42:58.473529   10269 start.go:360] acquireMachinesLock for enable-default-cni-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:42:58.473562   10269 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "enable-default-cni-967000"
	I0702 21:42:58.473577   10269 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:42:58.473607   10269 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:42:58.477239   10269 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:42:58.493110   10269 start.go:159] libmachine.API.Create for "enable-default-cni-967000" (driver="qemu2")
	I0702 21:42:58.493141   10269 client.go:168] LocalClient.Create starting
	I0702 21:42:58.493211   10269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:42:58.493245   10269 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:58.493254   10269 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:58.493296   10269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:42:58.493319   10269 main.go:141] libmachine: Decoding PEM data...
	I0702 21:42:58.493328   10269 main.go:141] libmachine: Parsing certificate...
	I0702 21:42:58.493713   10269 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:42:58.619346   10269 main.go:141] libmachine: Creating SSH key...
	I0702 21:42:58.708391   10269 main.go:141] libmachine: Creating Disk image...
	I0702 21:42:58.708400   10269 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:42:58.708577   10269 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/disk.qcow2
	I0702 21:42:58.717819   10269 main.go:141] libmachine: STDOUT: 
	I0702 21:42:58.717841   10269 main.go:141] libmachine: STDERR: 
	I0702 21:42:58.717887   10269 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/disk.qcow2 +20000M
	I0702 21:42:58.725947   10269 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:42:58.725962   10269 main.go:141] libmachine: STDERR: 
	I0702 21:42:58.725981   10269 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/disk.qcow2
	I0702 21:42:58.725986   10269 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:42:58.726040   10269 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:c6:4d:00:ed:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/disk.qcow2
	I0702 21:42:58.727707   10269 main.go:141] libmachine: STDOUT: 
	I0702 21:42:58.727722   10269 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:42:58.727743   10269 client.go:171] duration metric: took 234.599667ms to LocalClient.Create
	I0702 21:43:00.729889   10269 start.go:128] duration metric: took 2.256297583s to createHost
	I0702 21:43:00.729952   10269 start.go:83] releasing machines lock for "enable-default-cni-967000", held for 2.256427375s
	W0702 21:43:00.730038   10269 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:43:00.738879   10269 out.go:177] * Deleting "enable-default-cni-967000" in qemu2 ...
	W0702 21:43:00.759092   10269 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:43:00.759119   10269 start.go:728] Will try again in 5 seconds ...
	I0702 21:43:05.761213   10269 start.go:360] acquireMachinesLock for enable-default-cni-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:43:05.761682   10269 start.go:364] duration metric: took 357.708µs to acquireMachinesLock for "enable-default-cni-967000"
	I0702 21:43:05.761800   10269 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:43:05.762010   10269 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:43:05.768512   10269 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:43:05.809635   10269 start.go:159] libmachine.API.Create for "enable-default-cni-967000" (driver="qemu2")
	I0702 21:43:05.809688   10269 client.go:168] LocalClient.Create starting
	I0702 21:43:05.809803   10269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:43:05.809876   10269 main.go:141] libmachine: Decoding PEM data...
	I0702 21:43:05.809890   10269 main.go:141] libmachine: Parsing certificate...
	I0702 21:43:05.809952   10269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:43:05.809990   10269 main.go:141] libmachine: Decoding PEM data...
	I0702 21:43:05.809999   10269 main.go:141] libmachine: Parsing certificate...
	I0702 21:43:05.810638   10269 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:43:05.944517   10269 main.go:141] libmachine: Creating SSH key...
	I0702 21:43:06.122610   10269 main.go:141] libmachine: Creating Disk image...
	I0702 21:43:06.122626   10269 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:43:06.122814   10269 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/disk.qcow2
	I0702 21:43:06.132069   10269 main.go:141] libmachine: STDOUT: 
	I0702 21:43:06.132096   10269 main.go:141] libmachine: STDERR: 
	I0702 21:43:06.132150   10269 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/disk.qcow2 +20000M
	I0702 21:43:06.140089   10269 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:43:06.140114   10269 main.go:141] libmachine: STDERR: 
	I0702 21:43:06.140124   10269 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/disk.qcow2
	I0702 21:43:06.140131   10269 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:43:06.140158   10269 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:bf:fd:ff:4c:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/enable-default-cni-967000/disk.qcow2
	I0702 21:43:06.141790   10269 main.go:141] libmachine: STDOUT: 
	I0702 21:43:06.141806   10269 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:43:06.141816   10269 client.go:171] duration metric: took 332.129834ms to LocalClient.Create
	I0702 21:43:08.143963   10269 start.go:128] duration metric: took 2.381967333s to createHost
	I0702 21:43:08.144050   10269 start.go:83] releasing machines lock for "enable-default-cni-967000", held for 2.382394208s
	W0702 21:43:08.144372   10269 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:43:08.153708   10269 out.go:177] 
	W0702 21:43:08.157701   10269 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:43:08.157728   10269 out.go:239] * 
	* 
	W0702 21:43:08.159852   10269 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:43:08.169710   10269 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.84s)
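
Note: every attempt in this test fails at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot connect to the UNIX socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. A minimal diagnostic sketch, assuming the /opt/socket_vmnet install layout shown in the logs (the gateway address below is an illustrative value from socket_vmnet's documentation, not from this report):

	# Is the socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is down, (re)start it; vmnet requires root.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet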

TestNetworkPlugins/group/bridge/Start (9.85s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.845155208s)

-- stdout --
	* [bridge-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-967000" primary control-plane node in "bridge-967000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:43:10.321690   10384 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:43:10.321831   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:43:10.321835   10384 out.go:304] Setting ErrFile to fd 2...
	I0702 21:43:10.321838   10384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:43:10.321968   10384 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:43:10.323117   10384 out.go:298] Setting JSON to false
	I0702 21:43:10.339290   10384 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6159,"bootTime":1719975631,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:43:10.339361   10384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:43:10.344491   10384 out.go:177] * [bridge-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:43:10.351469   10384 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:43:10.351512   10384 notify.go:220] Checking for updates...
	I0702 21:43:10.358492   10384 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:43:10.361498   10384 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:43:10.364494   10384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:43:10.367545   10384 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:43:10.370448   10384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:43:10.373764   10384 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:43:10.373834   10384 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:43:10.373880   10384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:43:10.378477   10384 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:43:10.385507   10384 start.go:297] selected driver: qemu2
	I0702 21:43:10.385522   10384 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:43:10.385529   10384 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:43:10.387689   10384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:43:10.390552   10384 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:43:10.392032   10384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:43:10.392078   10384 cni.go:84] Creating CNI manager for "bridge"
	I0702 21:43:10.392082   10384 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:43:10.392124   10384 start.go:340] cluster config:
	{Name:bridge-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:43:10.395801   10384 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:43:10.404508   10384 out.go:177] * Starting "bridge-967000" primary control-plane node in "bridge-967000" cluster
	I0702 21:43:10.408463   10384 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:43:10.408479   10384 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:43:10.408488   10384 cache.go:56] Caching tarball of preloaded images
	I0702 21:43:10.408546   10384 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:43:10.408558   10384 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:43:10.408626   10384 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/bridge-967000/config.json ...
	I0702 21:43:10.408638   10384 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/bridge-967000/config.json: {Name:mk4a548a5ccb099ade4d26d6ea98b010714f183b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:43:10.408848   10384 start.go:360] acquireMachinesLock for bridge-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:43:10.408893   10384 start.go:364] duration metric: took 38.584µs to acquireMachinesLock for "bridge-967000"
	I0702 21:43:10.408905   10384 start.go:93] Provisioning new machine with config: &{Name:bridge-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:43:10.408935   10384 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:43:10.412484   10384 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:43:10.429478   10384 start.go:159] libmachine.API.Create for "bridge-967000" (driver="qemu2")
	I0702 21:43:10.429506   10384 client.go:168] LocalClient.Create starting
	I0702 21:43:10.429580   10384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:43:10.429616   10384 main.go:141] libmachine: Decoding PEM data...
	I0702 21:43:10.429624   10384 main.go:141] libmachine: Parsing certificate...
	I0702 21:43:10.429666   10384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:43:10.429688   10384 main.go:141] libmachine: Decoding PEM data...
	I0702 21:43:10.429697   10384 main.go:141] libmachine: Parsing certificate...
	I0702 21:43:10.430111   10384 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:43:10.558989   10384 main.go:141] libmachine: Creating SSH key...
	I0702 21:43:10.773098   10384 main.go:141] libmachine: Creating Disk image...
	I0702 21:43:10.773107   10384 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:43:10.773326   10384 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/disk.qcow2
	I0702 21:43:10.783031   10384 main.go:141] libmachine: STDOUT: 
	I0702 21:43:10.783050   10384 main.go:141] libmachine: STDERR: 
	I0702 21:43:10.783114   10384 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/disk.qcow2 +20000M
	I0702 21:43:10.791239   10384 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:43:10.791253   10384 main.go:141] libmachine: STDERR: 
	I0702 21:43:10.791274   10384 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/disk.qcow2
	I0702 21:43:10.791279   10384 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:43:10.791310   10384 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:fd:ab:6b:2c:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/disk.qcow2
	I0702 21:43:10.792958   10384 main.go:141] libmachine: STDOUT: 
	I0702 21:43:10.792971   10384 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:43:10.792987   10384 client.go:171] duration metric: took 363.484334ms to LocalClient.Create
	I0702 21:43:12.795409   10384 start.go:128] duration metric: took 2.386473958s to createHost
	I0702 21:43:12.795557   10384 start.go:83] releasing machines lock for "bridge-967000", held for 2.386700791s
	W0702 21:43:12.795616   10384 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:43:12.808902   10384 out.go:177] * Deleting "bridge-967000" in qemu2 ...
	W0702 21:43:12.831308   10384 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:43:12.831342   10384 start.go:728] Will try again in 5 seconds ...
	I0702 21:43:17.833364   10384 start.go:360] acquireMachinesLock for bridge-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:43:17.833585   10384 start.go:364] duration metric: took 184.417µs to acquireMachinesLock for "bridge-967000"
	I0702 21:43:17.833604   10384 start.go:93] Provisioning new machine with config: &{Name:bridge-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:43:17.833700   10384 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:43:17.842939   10384 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:43:17.866573   10384 start.go:159] libmachine.API.Create for "bridge-967000" (driver="qemu2")
	I0702 21:43:17.866617   10384 client.go:168] LocalClient.Create starting
	I0702 21:43:17.866691   10384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:43:17.866735   10384 main.go:141] libmachine: Decoding PEM data...
	I0702 21:43:17.866746   10384 main.go:141] libmachine: Parsing certificate...
	I0702 21:43:17.866789   10384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:43:17.866819   10384 main.go:141] libmachine: Decoding PEM data...
	I0702 21:43:17.866828   10384 main.go:141] libmachine: Parsing certificate...
	I0702 21:43:17.867189   10384 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:43:17.996103   10384 main.go:141] libmachine: Creating SSH key...
	I0702 21:43:18.079870   10384 main.go:141] libmachine: Creating Disk image...
	I0702 21:43:18.079880   10384 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:43:18.080071   10384 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/disk.qcow2
	I0702 21:43:18.089394   10384 main.go:141] libmachine: STDOUT: 
	I0702 21:43:18.089420   10384 main.go:141] libmachine: STDERR: 
	I0702 21:43:18.089466   10384 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/disk.qcow2 +20000M
	I0702 21:43:18.097818   10384 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:43:18.097832   10384 main.go:141] libmachine: STDERR: 
	I0702 21:43:18.097845   10384 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/disk.qcow2
	I0702 21:43:18.097850   10384 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:43:18.097886   10384 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:10:5f:25:aa:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/bridge-967000/disk.qcow2
	I0702 21:43:18.099603   10384 main.go:141] libmachine: STDOUT: 
	I0702 21:43:18.099623   10384 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:43:18.099635   10384 client.go:171] duration metric: took 233.018834ms to LocalClient.Create
	I0702 21:43:20.101702   10384 start.go:128] duration metric: took 2.268028916s to createHost
	I0702 21:43:20.101763   10384 start.go:83] releasing machines lock for "bridge-967000", held for 2.268214125s
	W0702 21:43:20.101991   10384 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:43:20.111409   10384 out.go:177] 
	W0702 21:43:20.117387   10384 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:43:20.117396   10384 out.go:239] * 
	* 
	W0702 21:43:20.118296   10384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:43:20.128353   10384 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.85s)
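
Note: the failing QEMU command line passes -netdev socket,id=net0,fd=3, i.e. socket_vmnet_client is expected to open /var/run/socket_vmnet and hand the connected descriptor (fd 3) to qemu-system-aarch64; when that connect is refused, QEMU exits before the VM ever boots. The refusal can be reproduced without minikube or QEMU at all. A sketch, assuming the BSD nc shipped with macOS (which supports UNIX-domain sockets via -U):

	# "Connection refused" here reproduces the error in the logs above.
	nc -U /var/run/socket_vmnet < /dev/null && echo connected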

TestNetworkPlugins/group/kubenet/Start (9.77s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.77029375s)

-- stdout --
	* [kubenet-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-967000" primary control-plane node in "kubenet-967000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0702 21:43:22.317504   10507 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:43:22.317635   10507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:43:22.317640   10507 out.go:304] Setting ErrFile to fd 2...
	I0702 21:43:22.317642   10507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:43:22.317762   10507 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:43:22.318904   10507 out.go:298] Setting JSON to false
	I0702 21:43:22.334647   10507 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6171,"bootTime":1719975631,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:43:22.334733   10507 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:43:22.340094   10507 out.go:177] * [kubenet-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:43:22.346914   10507 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:43:22.346952   10507 notify.go:220] Checking for updates...
	I0702 21:43:22.353886   10507 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:43:22.356934   10507 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:43:22.359966   10507 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:43:22.366926   10507 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:43:22.369952   10507 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:43:22.373338   10507 config.go:182] Loaded profile config "multinode-547000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:43:22.373406   10507 config.go:182] Loaded profile config "stopped-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0702 21:43:22.373454   10507 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:43:22.376878   10507 out.go:177] * Using the qemu2 driver based on user configuration
	I0702 21:43:22.382853   10507 start.go:297] selected driver: qemu2
	I0702 21:43:22.382864   10507 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:43:22.382873   10507 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:43:22.385181   10507 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:43:22.388083   10507 out.go:177] * Automatically selected the socket_vmnet network
	I0702 21:43:22.390949   10507 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0702 21:43:22.390963   10507 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0702 21:43:22.390993   10507 start.go:340] cluster config:
	{Name:kubenet-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubenet-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:43:22.394447   10507 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:43:22.402887   10507 out.go:177] * Starting "kubenet-967000" primary control-plane node in "kubenet-967000" cluster
	I0702 21:43:22.406917   10507 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:43:22.406930   10507 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:43:22.406936   10507 cache.go:56] Caching tarball of preloaded images
	I0702 21:43:22.406995   10507 preload.go:173] Found /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0702 21:43:22.407000   10507 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:43:22.407052   10507 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/kubenet-967000/config.json ...
	I0702 21:43:22.407062   10507 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/kubenet-967000/config.json: {Name:mk7a2dbfc0364738c057b75004ff4918f3262c3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:43:22.407250   10507 start.go:360] acquireMachinesLock for kubenet-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:43:22.407280   10507 start.go:364] duration metric: took 24.541µs to acquireMachinesLock for "kubenet-967000"
	I0702 21:43:22.407291   10507 start.go:93] Provisioning new machine with config: &{Name:kubenet-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubenet-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:43:22.407318   10507 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:43:22.413806   10507 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:43:22.429030   10507 start.go:159] libmachine.API.Create for "kubenet-967000" (driver="qemu2")
	I0702 21:43:22.429046   10507 client.go:168] LocalClient.Create starting
	I0702 21:43:22.429144   10507 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:43:22.429178   10507 main.go:141] libmachine: Decoding PEM data...
	I0702 21:43:22.429188   10507 main.go:141] libmachine: Parsing certificate...
	I0702 21:43:22.429223   10507 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:43:22.429255   10507 main.go:141] libmachine: Decoding PEM data...
	I0702 21:43:22.429263   10507 main.go:141] libmachine: Parsing certificate...
	I0702 21:43:22.429591   10507 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:43:22.555837   10507 main.go:141] libmachine: Creating SSH key...
	I0702 21:43:22.731525   10507 main.go:141] libmachine: Creating Disk image...
	I0702 21:43:22.731539   10507 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:43:22.731755   10507 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/disk.qcow2
	I0702 21:43:22.741527   10507 main.go:141] libmachine: STDOUT: 
	I0702 21:43:22.741558   10507 main.go:141] libmachine: STDERR: 
	I0702 21:43:22.741612   10507 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/disk.qcow2 +20000M
	I0702 21:43:22.749626   10507 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:43:22.749646   10507 main.go:141] libmachine: STDERR: 
	I0702 21:43:22.749659   10507 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/disk.qcow2
	I0702 21:43:22.749664   10507 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:43:22.749692   10507 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:6e:e2:47:d7:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/disk.qcow2
	I0702 21:43:22.751380   10507 main.go:141] libmachine: STDOUT: 
	I0702 21:43:22.751396   10507 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:43:22.751414   10507 client.go:171] duration metric: took 322.37ms to LocalClient.Create
	I0702 21:43:24.751795   10507 start.go:128] duration metric: took 2.344509875s to createHost
	I0702 21:43:24.751830   10507 start.go:83] releasing machines lock for "kubenet-967000", held for 2.344589459s
	W0702 21:43:24.751887   10507 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:43:24.761010   10507 out.go:177] * Deleting "kubenet-967000" in qemu2 ...
	W0702 21:43:24.778884   10507 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:43:24.778899   10507 start.go:728] Will try again in 5 seconds ...
	I0702 21:43:29.780967   10507 start.go:360] acquireMachinesLock for kubenet-967000: {Name:mkddfec0f74fa72116f1293eb0de0d0178c68d67 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0702 21:43:29.781233   10507 start.go:364] duration metric: took 220.542µs to acquireMachinesLock for "kubenet-967000"
	I0702 21:43:29.781267   10507 start.go:93] Provisioning new machine with config: &{Name:kubenet-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubenet-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0702 21:43:29.781368   10507 start.go:125] createHost starting for "" (driver="qemu2")
	I0702 21:43:29.784859   10507 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0702 21:43:29.810352   10507 start.go:159] libmachine.API.Create for "kubenet-967000" (driver="qemu2")
	I0702 21:43:29.810391   10507 client.go:168] LocalClient.Create starting
	I0702 21:43:29.810471   10507 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/ca.pem
	I0702 21:43:29.810520   10507 main.go:141] libmachine: Decoding PEM data...
	I0702 21:43:29.810534   10507 main.go:141] libmachine: Parsing certificate...
	I0702 21:43:29.810582   10507 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19184-6175/.minikube/certs/cert.pem
	I0702 21:43:29.810616   10507 main.go:141] libmachine: Decoding PEM data...
	I0702 21:43:29.810626   10507 main.go:141] libmachine: Parsing certificate...
	I0702 21:43:29.810992   10507 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso...
	I0702 21:43:29.939204   10507 main.go:141] libmachine: Creating SSH key...
	I0702 21:43:30.002854   10507 main.go:141] libmachine: Creating Disk image...
	I0702 21:43:30.002860   10507 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0702 21:43:30.003029   10507 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/disk.qcow2
	I0702 21:43:30.012007   10507 main.go:141] libmachine: STDOUT: 
	I0702 21:43:30.012029   10507 main.go:141] libmachine: STDERR: 
	I0702 21:43:30.012085   10507 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/disk.qcow2 +20000M
	I0702 21:43:30.020240   10507 main.go:141] libmachine: STDOUT: Image resized.
	
	I0702 21:43:30.020254   10507 main.go:141] libmachine: STDERR: 
	I0702 21:43:30.020270   10507 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/disk.qcow2
	I0702 21:43:30.020274   10507 main.go:141] libmachine: Starting QEMU VM...
	I0702 21:43:30.020309   10507 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:3f:be:48:fb:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19184-6175/.minikube/machines/kubenet-967000/disk.qcow2
	I0702 21:43:30.022130   10507 main.go:141] libmachine: STDOUT: 
	I0702 21:43:30.022147   10507 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0702 21:43:30.022159   10507 client.go:171] duration metric: took 211.767291ms to LocalClient.Create
	I0702 21:43:32.024341   10507 start.go:128] duration metric: took 2.242983916s to createHost
	I0702 21:43:32.024426   10507 start.go:83] releasing machines lock for "kubenet-967000", held for 2.24321825s
	W0702 21:43:32.024762   10507 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0702 21:43:32.032398   10507 out.go:177] 
	W0702 21:43:32.037460   10507 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0702 21:43:32.037485   10507 out.go:239] * 
	* 
	W0702 21:43:32.040018   10507 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:43:32.048398   10507 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.77s)

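The kubenet failure above has the same root cause as the other qemu2 start failures in this run: socket_vmnet_client could not reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM was never launched. A minimal Go probe for that socket, assuming the default path reported in the log (a diagnostic sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the unix socket that socket_vmnet_client reported as refused.
// The path is copied from the log above; adjust it if socket_vmnet is
// configured with a different socket location.
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Reproduces the "Connection refused" seen in the stderr dump.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails, the socket_vmnet daemon is not running (or not listening at that path); restarting it, or running the "minikube delete -p kubenet-967000" suggested in the log, is the usual next step.
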
Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.2/json-events 10.19
13 TestDownloadOnly/v1.30.2/preload-exists 0
16 TestDownloadOnly/v1.30.2/kubectl 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.08
18 TestDownloadOnly/v1.30.2/DeleteAll 0.11
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.28
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 9.39
39 TestErrorSpam/start 0.38
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 9.29
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.73
55 TestFunctional/serial/CacheCmd/cache/add_local 1.06
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.27
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.09
93 TestFunctional/parallel/License 0.25
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.32
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
126 TestFunctional/parallel/ProfileCmd/profile_list 0.08
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.1
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 1.76
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.2
193 TestMainNoArgs 0.03
237 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
241 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
242 TestNoKubernetes/serial/ProfileList 15.82
243 TestNoKubernetes/serial/Stop 3.75
245 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
259 TestStoppedBinaryUpgrade/Setup 1.01
265 TestStartStop/group/old-k8s-version/serial/Stop 2.86
266 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
276 TestStartStop/group/no-preload/serial/Stop 2.12
277 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
287 TestStartStop/group/embed-certs/serial/Stop 1.73
288 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
298 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.99
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
307 TestStartStop/group/newest-cni/serial/DeployApp 0
308 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.05
309 TestStartStop/group/newest-cni/serial/Stop 3.22
310 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.1
312 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
313 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
325 TestStoppedBinaryUpgrade/MinikubeLogs 0.66

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-617000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-617000: exit status 85 (94.867333ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-617000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |          |
	|         | -p download-only-617000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/02 21:18:31
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0702 21:18:31.387813    6671 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:18:31.387959    6671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:18:31.387963    6671 out.go:304] Setting ErrFile to fd 2...
	I0702 21:18:31.387966    6671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:18:31.388077    6671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	W0702 21:18:31.388157    6671 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19184-6175/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19184-6175/.minikube/config/config.json: no such file or directory
	I0702 21:18:31.389507    6671 out.go:298] Setting JSON to true
	I0702 21:18:31.407613    6671 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4680,"bootTime":1719975631,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:18:31.407713    6671 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:18:31.412600    6671 out.go:97] [download-only-617000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:18:31.412759    6671 notify.go:220] Checking for updates...
	W0702 21:18:31.412793    6671 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball: no such file or directory
	I0702 21:18:31.415490    6671 out.go:169] MINIKUBE_LOCATION=19184
	I0702 21:18:31.418449    6671 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:18:31.422520    6671 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:18:31.425891    6671 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:18:31.428506    6671 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	W0702 21:18:31.435559    6671 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0702 21:18:31.435833    6671 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:18:31.439442    6671 out.go:97] Using the qemu2 driver based on user configuration
	I0702 21:18:31.439460    6671 start.go:297] selected driver: qemu2
	I0702 21:18:31.439486    6671 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:18:31.439539    6671 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:18:31.442538    6671 out.go:169] Automatically selected the socket_vmnet network
	I0702 21:18:31.448003    6671 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0702 21:18:31.448102    6671 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0702 21:18:31.448161    6671 cni.go:84] Creating CNI manager for ""
	I0702 21:18:31.448179    6671 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0702 21:18:31.448236    6671 start.go:340] cluster config:
	{Name:download-only-617000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:18:31.452181    6671 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:18:31.457308    6671 out.go:97] Downloading VM boot image ...
	I0702 21:18:31.457341    6671 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/iso/arm64/minikube-v1.33.1-1719929171-19175-arm64.iso
	I0702 21:18:36.669440    6671 out.go:97] Starting "download-only-617000" primary control-plane node in "download-only-617000" cluster
	I0702 21:18:36.669472    6671 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0702 21:18:36.728897    6671 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0702 21:18:36.728905    6671 cache.go:56] Caching tarball of preloaded images
	I0702 21:18:36.729071    6671 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0702 21:18:36.735975    6671 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0702 21:18:36.735980    6671 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0702 21:18:36.810092    6671 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0702 21:18:44.002497    6671 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0702 21:18:44.002657    6671 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0702 21:18:44.698814    6671 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0702 21:18:44.699018    6671 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/download-only-617000/config.json ...
	I0702 21:18:44.699036    6671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/download-only-617000/config.json: {Name:mke1e04db6842554434f52e29a26f088d8c718f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:18:44.700151    6671 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0702 21:18:44.700348    6671 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0702 21:18:45.085326    6671 out.go:169] 
	W0702 21:18:45.088332    6671 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19184-6175/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1084a1a20 0x1084a1a20 0x1084a1a20 0x1084a1a20 0x1084a1a20 0x1084a1a20 0x1084a1a20] Decompressors:map[bz2:0x1400048f690 gz:0x1400048f698 tar:0x1400048f640 tar.bz2:0x1400048f650 tar.gz:0x1400048f660 tar.xz:0x1400048f670 tar.zst:0x1400048f680 tbz2:0x1400048f650 tgz:0x1400048f660 txz:0x1400048f670 tzst:0x1400048f680 xz:0x1400048f6a0 zip:0x1400048f6b0 zst:0x1400048f6a8] Getters:map[file:0x140013885e0 http:0x14000884230 https:0x14000884280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0702 21:18:45.088356    6671 out_reason.go:110] 
	W0702 21:18:45.098323    6671 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0702 21:18:45.102216    6671 out.go:169] 
	
	
	* The control-plane node download-only-617000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-617000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
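
The only error inside this otherwise-passing test is the kubectl cache miss above: dl.k8s.io answers 404 for the v1.20.0 darwin/arm64 checksum, most likely because no darwin/arm64 kubectl binary was ever published for that release. The URL can be probed directly; a short Go sketch using the address copied verbatim from the log:

package main

import (
	"fmt"
	"net/http"
)

// HEAD the checksum URL that the getter flagged with
// "bad response code: 404".
func main() {
	resp, err := http.Head("https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // a 404 here reproduces the cache failure
}
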
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-617000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.2/json-events (10.19s)

=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-214000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-214000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=qemu2 : (10.190792542s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (10.19s)

TestDownloadOnly/v1.30.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.2/kubectl
--- PASS: TestDownloadOnly/v1.30.2/kubectl (0.00s)

TestDownloadOnly/v1.30.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-214000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-214000: exit status 85 (79.224333ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-617000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
	|         | -p download-only-617000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
	| delete  | -p download-only-617000        | download-only-617000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT | 02 Jul 24 21:18 PDT |
	| start   | -o=json --download-only        | download-only-214000 | jenkins | v1.33.1 | 02 Jul 24 21:18 PDT |                     |
	|         | -p download-only-214000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/02 21:18:45
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0702 21:18:45.518101    6699 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:18:45.518267    6699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:18:45.518273    6699 out.go:304] Setting ErrFile to fd 2...
	I0702 21:18:45.518276    6699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:18:45.518420    6699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:18:45.519471    6699 out.go:298] Setting JSON to true
	I0702 21:18:45.535407    6699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4694,"bootTime":1719975631,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:18:45.535474    6699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:18:45.540445    6699 out.go:97] [download-only-214000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:18:45.540551    6699 notify.go:220] Checking for updates...
	I0702 21:18:45.543192    6699 out.go:169] MINIKUBE_LOCATION=19184
	I0702 21:18:45.546326    6699 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:18:45.550291    6699 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:18:45.553312    6699 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:18:45.556284    6699 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	W0702 21:18:45.560749    6699 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0702 21:18:45.560930    6699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:18:45.564280    6699 out.go:97] Using the qemu2 driver based on user configuration
	I0702 21:18:45.564287    6699 start.go:297] selected driver: qemu2
	I0702 21:18:45.564292    6699 start.go:901] validating driver "qemu2" against <nil>
	I0702 21:18:45.564327    6699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0702 21:18:45.567274    6699 out.go:169] Automatically selected the socket_vmnet network
	I0702 21:18:45.572516    6699 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0702 21:18:45.572632    6699 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0702 21:18:45.572649    6699 cni.go:84] Creating CNI manager for ""
	I0702 21:18:45.572658    6699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0702 21:18:45.572665    6699 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0702 21:18:45.572710    6699 start.go:340] cluster config:
	{Name:download-only-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:18:45.576084    6699 iso.go:125] acquiring lock: {Name:mkfc994e1133a3143403170143dc19a0a4089be1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0702 21:18:45.579341    6699 out.go:97] Starting "download-only-214000" primary control-plane node in "download-only-214000" cluster
	I0702 21:18:45.579349    6699 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:18:45.637500    6699 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:18:45.637518    6699 cache.go:56] Caching tarball of preloaded images
	I0702 21:18:45.637716    6699 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:18:45.642826    6699 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0702 21:18:45.642838    6699 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0702 21:18:45.724449    6699 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4?checksum=md5:3bd37d965c85173ac77cdcc664938efd -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0702 21:18:50.103623    6699 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0702 21:18:50.103806    6699 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0702 21:18:50.647886    6699 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0702 21:18:50.648079    6699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/download-only-214000/config.json ...
	I0702 21:18:50.648095    6699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/download-only-214000/config.json: {Name:mk7ea59a819a17f246e69de8cdbd9a00221ecfe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0702 21:18:50.649431    6699 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0702 21:18:50.649566    6699 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19184-6175/.minikube/cache/darwin/arm64/v1.30.2/kubectl
	
	
	* The control-plane node download-only-214000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-214000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
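
The preload flow in the log above (download the .tar.lz4, then "getting checksum" / "verifying checksum") is effectively an md5 comparison against the value carried in the URL's ?checksum=md5:... query. A standalone Go sketch of that check, using the v1.30.2 digest from the log; the local file name is a hypothetical stand-in for minikube's cache path:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// Hash a downloaded preload tarball and compare it with the md5 from the
// download URL (digest copied from the log above).
func main() {
	const want = "3bd37d965c85173ac77cdcc664938efd"
	f, err := os.Open("preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		fmt.Println("read:", err)
		return
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		fmt.Println("checksum mismatch:", got)
		return
	}
	fmt.Println("checksum OK")
}
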
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.08s)

TestDownloadOnly/v1.30.2/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.11s)

TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-214000
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.28s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-608000 --alsologtostderr --binary-mirror http://127.0.0.1:50999 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-608000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-608000
--- PASS: TestBinaryMirror (0.28s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-066000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-066000: exit status 85 (56.451167ms)

-- stdout --
	* Profile "addons-066000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-066000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-066000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-066000: exit status 85 (60.258833ms)

-- stdout --
	* Profile "addons-066000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-066000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (9.39s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.39s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 status: exit status 7 (30.822542ms)

-- stdout --
	nospam-331000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 status: exit status 7 (29.988625ms)

-- stdout --
	nospam-331000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 status: exit status 7 (29.855292ms)

-- stdout --
	nospam-331000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 pause: exit status 83 (40.569208ms)

-- stdout --
	* The control-plane node nospam-331000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-331000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 pause: exit status 83 (39.780417ms)

-- stdout --
	* The control-plane node nospam-331000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-331000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 pause: exit status 83 (39.846ms)

-- stdout --
	* The control-plane node nospam-331000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-331000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 unpause: exit status 83 (37.712792ms)

-- stdout --
	* The control-plane node nospam-331000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-331000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 unpause: exit status 83 (38.912667ms)

-- stdout --
	* The control-plane node nospam-331000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-331000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 unpause: exit status 83 (39.731916ms)

-- stdout --
	* The control-plane node nospam-331000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-331000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (9.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 stop: (3.230448583s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 stop: (2.848942083s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-331000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-331000 stop: (3.209835292s)
--- PASS: TestErrorSpam/stop (9.29s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19184-6175/.minikube/files/etc/test/nested/copy/6669/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.73s)

TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-250000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1653334758/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 cache add minikube-local-cache-test:functional-250000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 cache delete minikube-local-cache-test:functional-250000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-250000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 config get cpus: exit status 14 (29.919583ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 config get cpus: exit status 14 (32.534458ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
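
The "Non-zero exit ... exit status 14" lines above show the harness treating a missing config key as an expected error rather than a test failure. A hedged Go sketch of how an exit code like that can be recovered; it mirrors, but does not reproduce, what the (dbg) helpers do:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Run the same command the test runs and report its exit code. The binary
// path and profile name are taken from the log; everything else is
// illustrative.
func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-250000", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("exit status %d: %s", ee.ExitCode(), out) // expect 14 while the key is unset
		return
	}
	if err != nil {
		fmt.Println("could not run command:", err)
		return
	}
	fmt.Printf("cpus = %s", out)
}
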
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-250000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-250000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (158.248666ms)

-- stdout --
	* [functional-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0702 21:20:35.103868    7277 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:20:35.104037    7277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:20:35.104042    7277 out.go:304] Setting ErrFile to fd 2...
	I0702 21:20:35.104045    7277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:20:35.104233    7277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:20:35.105444    7277 out.go:298] Setting JSON to false
	I0702 21:20:35.125598    7277 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4804,"bootTime":1719975631,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:20:35.125663    7277 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:20:35.130495    7277 out.go:177] * [functional-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0702 21:20:35.136460    7277 notify.go:220] Checking for updates...
	I0702 21:20:35.140333    7277 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:20:35.144385    7277 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:20:35.147335    7277 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:20:35.150388    7277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:20:35.153359    7277 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:20:35.156331    7277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:20:35.159708    7277 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:20:35.160007    7277 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:20:35.164338    7277 out.go:177] * Using the qemu2 driver based on existing profile
	I0702 21:20:35.167368    7277 start.go:297] selected driver: qemu2
	I0702 21:20:35.167381    7277 start.go:901] validating driver "qemu2" against &{Name:functional-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:functional-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:20:35.167442    7277 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:20:35.174364    7277 out.go:177] 
	W0702 21:20:35.178181    7277 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0702 21:20:35.182324    7277 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-250000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-250000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-250000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (105.844375ms)

-- stdout --
	* [functional-250000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0702 21:20:35.327030    7288 out.go:291] Setting OutFile to fd 1 ...
	I0702 21:20:35.327139    7288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:20:35.327144    7288 out.go:304] Setting ErrFile to fd 2...
	I0702 21:20:35.327146    7288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0702 21:20:35.327290    7288 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19184-6175/.minikube/bin
	I0702 21:20:35.328752    7288 out.go:298] Setting JSON to false
	I0702 21:20:35.345252    7288 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4804,"bootTime":1719975631,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0702 21:20:35.345328    7288 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0702 21:20:35.350403    7288 out.go:177] * [functional-250000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0702 21:20:35.357394    7288 out.go:177]   - MINIKUBE_LOCATION=19184
	I0702 21:20:35.357459    7288 notify.go:220] Checking for updates...
	I0702 21:20:35.364418    7288 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	I0702 21:20:35.367362    7288 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0702 21:20:35.370370    7288 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0702 21:20:35.373421    7288 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	I0702 21:20:35.374708    7288 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0702 21:20:35.377753    7288 config.go:182] Loaded profile config "functional-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0702 21:20:35.378022    7288 driver.go:392] Setting default libvirt URI to qemu:///system
	I0702 21:20:35.382318    7288 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0702 21:20:35.387341    7288 start.go:297] selected driver: qemu2
	I0702 21:20:35.387351    7288 start.go:901] validating driver "qemu2" against &{Name:functional-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:functional-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0702 21:20:35.387407    7288 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0702 21:20:35.393353    7288 out.go:177] 
	W0702 21:20:35.397232    7288 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0702 21:20:35.401387    7288 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.30869375s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-250000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.32s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-250000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image rm gcr.io/google-containers/addon-resizer:functional-250000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-250000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 image save --daemon gcr.io/google-containers/addon-resizer:functional-250000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-250000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "46.967958ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.767458ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "46.929375ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.794959ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.01304675s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
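
Note: this subtest deliberately resolves the tunnel-published name through dscacheutil, i.e. through the macOS system resolver, rather than through dig. As a rough illustration only, the equivalent probe in Go is sketched below; be aware that Go's pure-Go resolver can bypass the resolution path dscacheutil exercises, so this is an approximation of the check, not a substitute for it:

// dnscheck.go — illustrative only. Asks the resolver for the service name
// that `minikube tunnel` is expected to publish (name taken from the log).
package main

import (
	"fmt"
	"net"
)

func main() {
	// The trailing dot makes the name fully qualified, as in the dscacheutil query.
	addrs, err := net.LookupHost("nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("not resolvable (is `minikube tunnel` running?):", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}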

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-250000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-250000
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-250000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-250000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-744000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-744000 --output=json --user=testUser: (1.757138291s)
--- PASS: TestJSONOutput/stop/Command (1.76s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-880000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-880000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.842584ms)

-- stdout --
	{"specversion":"1.0","id":"cdbdf8c1-2f19-42ee-a3f1-6d1eefa06244","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-880000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"718a8d58-758e-4380-8a62-918d05c37949","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19184"}}
	{"specversion":"1.0","id":"8b208e03-9cc4-49fa-a65f-23be693ae19f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig"}}
	{"specversion":"1.0","id":"33a0efbd-0200-48b5-ad68-656415d82afe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e4694b5a-06db-4319-988d-f3c11df400a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b56170af-4987-4da7-b0f9-bd7211288316","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube"}}
	{"specversion":"1.0","id":"8d6aab09-0373-42c7-9d8a-61a854c284c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"02ce392b-44fc-437f-8dae-7fe69e0f8716","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-880000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-880000
--- PASS: TestErrorJSONOutput (0.20s)
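
Note: each line of the captured stdout is a CloudEvents-style JSON object, and the test asserts on the final io.k8s.sigs.minikube.error event (DRV_UNSUPPORTED_OS, exit code 56). For illustration only, a minimal Go sketch that scans such a stream and surfaces error events; the field names are inferred from the capture above rather than taken from a published schema:

// eventscan.go — illustrative only. Pipe `minikube ... --output=json` into it.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors only the fields visible in the capture above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore lines that are not JSON events
		}
		if strings.HasSuffix(ev.Type, ".error") {
			fmt.Printf("error %s: %s (exit code %s)\n",
				ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
		}
	}
}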

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-934000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.029709ms)

-- stdout --
	* [NoKubernetes-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19184
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19184-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19184-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-934000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-934000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (46.887583ms)

-- stdout --
	* The control-plane node NoKubernetes-934000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-934000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (15.82s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.705674042s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.82s)

TestNoKubernetes/serial/Stop (3.75s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-934000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-934000: (3.745378083s)
--- PASS: TestNoKubernetes/serial/Stop (3.75s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-934000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-934000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.08825ms)

-- stdout --
	* The control-plane node NoKubernetes-934000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-934000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/Setup (1.01s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.01s)

TestStartStop/group/old-k8s-version/serial/Stop (2.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-152000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-152000 --alsologtostderr -v=3: (2.8588165s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.86s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-152000 -n old-k8s-version-152000: exit status 7 (60.872792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-152000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (2.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-639000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-639000 --alsologtostderr -v=3: (2.115311792s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-639000 -n no-preload-639000: exit status 7 (61.317458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-639000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (1.73s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-167000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-167000 --alsologtostderr -v=3: (1.72519275s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.73s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-167000 -n embed-certs-167000: exit status 7 (61.9055ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-167000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-265000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-265000 --alsologtostderr -v=3: (1.986191209s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.99s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-265000 -n default-k8s-diff-port-265000: exit status 7 (54.992708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-265000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.05s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-777000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.05s)

TestStartStop/group/newest-cni/serial/Stop (3.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-777000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-777000 --alsologtostderr -v=3: (3.218828875s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.22s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-777000 -n newest-cni-777000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-777000 -n newest-cni-777000: exit status 7 (41.085458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-777000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-896000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (13.33s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-250000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4125254230/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1719980400184192000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4125254230/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1719980400184192000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4125254230/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1719980400184192000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4125254230/001/test-1719980400184192000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (54.533917ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.105ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.196875ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.972375ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.657208ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.677833ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.401792ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.586875ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "sudo umount -f /mount-9p": exit status 83 (44.826791ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-250000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-250000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4125254230/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (13.33s)

TestFunctional/parallel/MountCmd/specific-port (11.55s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-250000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3178806653/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (59.45725ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.797459ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.972833ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.027833ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.93525ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.878292ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.580083ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "sudo umount -f /mount-9p": exit status 83 (45.604583ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-250000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-250000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3178806653/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.55s)

TestFunctional/parallel/MountCmd/VerifyCleanup (9.97s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-250000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2657301408/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-250000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2657301408/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-250000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2657301408/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1: exit status 83 (77.42025ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1: exit status 83 (83.193916ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1: exit status 83 (83.001333ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1: exit status 83 (85.592667ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1: exit status 83 (84.709ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1: exit status 83 (83.037292ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-250000 ssh "findmnt -T" /mount1: exit status 83 (85.008542ms)

-- stdout --
	* The control-plane node functional-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-250000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-250000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2657301408/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-250000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2657301408/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-250000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2657301408/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (9.97s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-287000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-287000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)

TestNetworkPlugins/group/cilium (2.34s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-967000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-967000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/hosts:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/resolv.conf:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-967000

>>> host: crictl pods:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: crictl containers:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> k8s: describe netcat deployment:
error: context "cilium-967000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-967000" does not exist

>>> k8s: netcat logs:
error: context "cilium-967000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-967000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-967000" does not exist

>>> k8s: coredns logs:
error: context "cilium-967000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-967000" does not exist

>>> k8s: api server logs:
error: context "cilium-967000" does not exist

>>> host: /etc/cni:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: ip a s:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: ip r s:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: iptables-save:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: iptables table nat:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-967000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-967000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-967000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-967000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-967000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-967000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-967000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-967000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-967000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-967000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-967000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: kubelet daemon config:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> k8s: kubelet logs:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/jenkins/minikube-integration/19184-6175/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Jul 2024 21:31:23 PDT
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://10.0.2.15:8443
  name: running-upgrade-908000
contexts:
- context:
    cluster: running-upgrade-908000
    user: running-upgrade-908000
  name: running-upgrade-908000
current-context: running-upgrade-908000
kind: Config
preferences: {}
users:
- name: running-upgrade-908000
  user:
    client-certificate: /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/client.crt
    client-key: /Users/jenkins/minikube-integration/19184-6175/.minikube/profiles/running-upgrade-908000/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-967000

>>> host: docker daemon status:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: docker daemon config:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: docker system info:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: cri-docker daemon status:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: cri-docker daemon config:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: cri-dockerd version:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: containerd daemon status:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: containerd daemon config:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: containerd config dump:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: crio daemon status:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: crio daemon config:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/crio:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: crio config:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

----------------------- debugLogs end: cilium-967000 [took: 2.231760292s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-967000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-967000
--- SKIP: TestNetworkPlugins/group/cilium (2.34s)